
vhskid

Member
  • Posts: 89
  • Joined
  • Last visited

Everything posted by vhskid

  1. This isn't up to the designer to decide. In my experience, most of the time companies go for interfaces that are "useful and pretty" for the majority of the population. Forcing accessibility-focused choices onto the main design version would (in my case) get the project rejected or flooded with revisions. If I had the choice and the budget, I would create two versions of the interface. But the reality is different. Thankfully, there are at least widgets like Userway and browser extensions.
  2. As a UX / UI designer with over 15 years of experience, I don't agree. Colorblind and vision-loss interface versions are much less pleasant to look at and use for "regular" folks. I don't believe in forcing the majority to use something accommodated to the minority. I'm more of a "let's have stairs but also elevators and ramps" kind of guy. Hence the separate / accessible interface version argument.
  3. There are different degrees of color blindness, so it's not like every affected person sees no color at all. While I agree that creating accessible interfaces for those with disabilities is important, designing one interface that is efficient, useful, attractive, and accessible to everyone is impossible. That's what contrasting / accessible separate interface versions should be for. Being an additional cost makes them rare, and people with disabilities don't have it easy online either.
  4. They do want to move away from stars in the reviews form, to get more nuanced feedback. It's like you wrote yourself earlier... Sure, because that's how user testing works for web / app interfaces. I'm a UX / UI designer with over 15 years of experience, but what do I know?
  5. My approach still uses a progressive scale with text labels, so deficiencies in color perception are not a factor that would interfere with the user experience or skew the survey results. Emojis, on the other hand, are very troublesome, for the reasons I mentioned in my first post.
  6. I considered the cultural differences aspect when contemplating the design ideas, but my conclusions were in favor of the color pills approach:

- This is a progressive scale with text labels, and in this context and design form, colors aren't impactful enough to steer the user in the wrong direction - it's not like there would be scenarios such as "Oh no, the red color lured me and I chose the wrong option"
- Traffic-light colors are universal enough around the world to use red and green as starting points / base here
- Cultural differences are important, but if we go down that pathway of tiptoeing around them to the highest extent when considering color for globally accessible interfaces, we'd end up with gray-only websites and apps
- The culturally sensitive use of color in design is more crucial in larger contexts with photos, illustrations, key visuals, and bigger storytelling like videos

Fiverr wants to move away from stars, so by not using them I'm playing a bit of devil's advocate. Will this approach (in general, not with emojis) be more accurate than stars when measuring buyers' satisfaction? Only bigger user testing with a large control group could really tell. I myself am on the fence right now. Having said that, comparing these 2 non-star approaches, my design solution uses a more straightforward progressive scale and translates better to stars than emojis do.
  7. Among the recent big (and bad) changes, we have a full look only at the new reviews system. Given that we don't have to speculate and guess about this one, I went ahead, broke it into pieces, and assembled it again in the form of a UX audit and redesign. I wrote a separate post so it won't get lost between the memes: [ UX audit + redesign ] New Rating & Review System. Ideally, we'll have separate threads for every gig rating metric, because these subjects are just too big to discuss all at once.
  8. Heads up → This is a long one

Hi there,

With the recent big changes in the seller / gig rating system, a new order reviews system came with it. While both seem flawed on many levels, we (sellers) have a full look only at the new reviews system. As a Fiverr seller and freelance Product Designer, I decided to express my concerns regarding the new reviews form (in the form of a UX audit) and provide improvement propositions by creating new designs.

→ Introduction

Unfortunately, the logic and execution of the new reviews look like the result of a 10-minute brainstorming session plus some heavy hours dedicated to creating a PR ideology around it. I see many things wrong with the new review form and want to address them all. I'll use a lot of YOU pronouns, directed at the Fiverr executives and teams responsible for the discussed changes. So this post can be treated as an open letter to those who are in charge and have influence over implementing the platform's functionalities. Not that I'm holding my breath for a response, but maybe this will reach a few relevant inboxes. I'm also attaching a → PDF version ← with hi-res images.

→ 1 - Emojis

This design decision causes problems on many levels...

Behavioral Science Perspective

A person's emotional state is a complex mixture of influences and codependencies. Many things affect our moods and feelings in the short and long term. Using the basic emoji faces expresses a specific momentary emotion / mood as a reaction to something. Given that, I see a few issues:

- Asking people to rate something with an emoji forces them to fight against their momentary emotions and overall mood to provide 'fair' feedback - if I don't feel excited 😐 then I have reservations about using the excited emoji 🤩 for the 'Exceptional' rate
- Emotions are whimsical, tricky, and affect us all the time - not only on a conscious but also a subconscious level - so emoji feedback is doomed to be skewed from the get-go
- Emoji rating also gives the impression that it doesn't have significance and won't affect the freelancer much

Reports about the insignificance issue are starting to trickle in: Source →

→ 2 - The Scale

In what reality, on a single-step scale, is the next grade after 'Average' 'Very good'? With the old rating form, the tooltips visible when hovering over the rating stars were also not balanced (4 stars meant 'Good' and 5 'Excellent'), but the stars were much more straightforward, so it didn't matter that much. But here the labels are deceptive. In user experience / interface language, the word for this kind of diversion is 'dark pattern'.
A few points on why hacking the scale is disruptive, not constructive:

- 5 ⭐ meaning 'Very good' / 'Great' is universal for the customer experience online
- 'Exceptional', on the other hand, has bigger significance and feels more like 5+ because it stands for special / extra / unique / rare
- On occasion, we encounter the 'Excellent' label being used for 5 stars, but it carries less weight than 'Exceptional' as well
- Buyers have been exposed to 'classic' 5-star ratings for too long and too often to anchor an exception in their minds for your modified system and form a new, Fiverr-specific habit
- The 'Exceptional' scenario should be an additional distinction, because people will just not get used to considering the 5th star / maximum rate as something better than "Very good"
- There is just too big a semantic gap between 'Average' and 'Very good'
- Cultural differences will limit the use of the 'Exceptional' rate as "the best"

We didn't have to wait long for the cultural differences issue to cause problems: Source →

→ 3 - Perfect 5 / 5 Being "Not Trustworthy"

In the new review system, it looks like your goal was for people not to rate the highest, when it should be "We want to create a review system that reflects the real buyer experience". In the responses on the forum, your staff keeps mentioning research to support the reasoning behind the changes. But this looks more like a research bias scenario. I get the company's premise about highlighting exceptional work… Source → …but that can be done differently. More on that later.

As for "relieving the pressure of aiming for a perfect 5-star rating":

- The pressure will always be there - in a competitive marketplace, sellers need every advantage they can get, especially with an algorithm as extensive as Fiverr's
- You increased the pressure because buyers are now rating lower, due to the confusing new rating form
- You increased the pressure because we now know that our gig scores depend on how our competition is doing - if my performance as a seller is consistently great but not continually improving, my score can go down because others' can go up
- You increased the pressure by causing confusion with the way you rolled out and are handling the new levels / rating (pre)launch
- The "4⭐ is the new 5⭐" approach misleads users (in the rating form) and buyers in general (in the published reviews), because you changed the rules of the game while the past reviews and new ones are in one pot - the sudden drop in ratings looks like a drop in the seller's work quality

→ 4 - Optional Questions

The way the supplementary questions (dependent on the selected rate) and their answers are displayed is misleading:

- The answers look and sound the same, but their selection has different effects depending on the selected rate option
- There is no visual distinction as to whether the question asks about negatives or positives
- There is also no indicator that the question is optional

The way this is implemented will cause scenarios like this: the user reads the first optional question and automatically assumes that the others have slightly different answers but concern the same aspect - positives, for example. The user is under the impression that they are selecting answers for positive metrics in all cases, while they really concern the negative ones. The same can happen if the user doesn't read the supplementary questions at all and selects all the answers as positives.
There are already reports about this: Source →

→ 5 - Real-life Consideration

All the above issues are amplified when the buyer is in a rush, which is a common scenario. Who isn't busy today? Clicking fast through an "unimportant" questionnaire or pop-ups to be done with them while not paying much attention - who among us hasn't done this? The earlier shared comment about the insignificance of emoji ratings also mentions the quick clicking-through aspect: Source →

→ Solutions

A better design approach is actually in the new levels' landing page:

RATING SCALE

A simple scale with colors looks balanced and intuitive:

QUESTIONS

And now the tricky part with supplementary questions.

Wording

When considered negative, the answers should imply that with their wording:

Visual Indicator

The selected state should show visually whether the response has a positive or negative connotation:

This is way more intuitive and straightforward.

SCORE PREVIEW

Before sending the rating, there should be a noticeable score preview that clearly shows what will be published:

BONUS ROUND - "making exceptional work stand out"

The "exceptional" aspect of order delivery can be determined in an indirect way.

Spontaneous Reactions

The Amount Approach

When all 3 main questions are rated the highest, the number of selected positive (optional) question responses could additionally affect the score. So let's say 50% of selected responses for a single question could add 0.1 ⭐ to the score. The threshold size and star fraction value would need careful assessment. (See the sketch at the end of this post.)

The Meaning Approach

Another approach could take into account the occurrence of a specific response selection. "Went above and beyond" by itself indicates exceptional work.

In the above scenarios, the 'exceptional' rate distinction would be spontaneous (yet measurable), and the lack of it won't interfere with the 5-star rating standard / habits / previous system scores.

Labeling

With either approach, there will be a possibility of a 5+ ⭐ rating, and there could be a label / badge indicating this: if the overall rating of the gig were 5+⭐, the 'Exceptional' badge could be included with the gig's main star score (in all the applicable places).

→ Wrapping up

The devil is in the details, and I feel like we need exorcisms here. I hope that the above concerns won't fall on deaf ears. A guy can dream.

Signed
Concerned Seller

CC: @Kesha @Lyndsey_Fiverr @ran_success

____________________________________________________

FAQ section for the community

Why didn't you mention private reviews? - This is a separate can of worms.

Why didn't you mention the "Value for money" rating question / issue? - This topic deserves a separate thread, considered not only as a question in the rating form but also as a gig score metric. All gig metrics should have their own respective threads, because these subjects are just too big to discuss all at once.

____________________________________________________
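Since the two bonus approaches above are essentially scoring rules, here is a minimal sketch of how they could combine into the score preview. This is an illustration under assumptions, not Fiverr's actual data model: every type and function name is hypothetical, and only the 50% threshold, the 0.1 ⭐ increment, and the "Went above and beyond" trigger come from the proposals above.

```ts
// Hypothetical shapes for the review form data (illustration only).
type Polarity = "positive" | "negative";

interface OptionalQuestion {
  prompt: string;
  polarity: Polarity;   // drives the proposed positive / negative visual indicator
  options: string[];    // available answers
  selected: string[];   // answers the buyer ticked
}

interface Review {
  mainRatings: [number, number, number]; // the 3 main questions, 1..5 stars each
  optionalQuestions: OptionalQuestion[];
}

const BONUS_THRESHOLD = 0.5;    // 50% of a question's answers selected...
const BONUS_PER_QUESTION = 0.1; // ...adds 0.1 ⭐ (values from the post, to be tuned)
const EXCEPTIONAL_ANSWER = "Went above and beyond"; // the Meaning Approach trigger

function baseScore(review: Review): number {
  const sum = review.mainRatings.reduce((a, b) => a + b, 0);
  return sum / review.mainRatings.length;
}

// The Amount Approach: a bonus only when all 3 main questions are maxed out.
function amountBonus(review: Review): number {
  if (!review.mainRatings.every((r) => r === 5)) return 0;
  return review.optionalQuestions
    .filter((q) => q.polarity === "positive")
    .filter((q) => q.selected.length / q.options.length >= BONUS_THRESHOLD)
    .length * BONUS_PER_QUESTION;
}

// The Meaning Approach: one specific answer by itself marks exceptional work.
function meaningFlag(review: Review): boolean {
  return review.optionalQuestions.some((q) => q.selected.includes(EXCEPTIONAL_ANSWER));
}

// Score preview shown before submitting.
function scorePreview(review: Review) {
  const withBonus = baseScore(review) + amountBonus(review);
  return {
    publishedStars: Math.min(5, withBonus),                 // the standard 5-star scale stays intact
    exceptionalBadge: withBonus > 5 || meaningFlag(review), // the 5+ distinction lives in the badge
  };
}
```

The key property, matching the post's intent: the bonus can only add, never subtract, so a buyer who skips the optional questions still produces a standard 5-star review, and the 'Exceptional' distinction stays spontaneous yet measurable.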
  9. 2 of my gigs lost their scores 1-2 days ago, and the feedback from Customer Support is that these gigs didn't have new orders for some time (close to a year, actually), so the scores disappeared. So I wouldn't bet on insight-driven fixing, but rather on duct-taping.
  10. Yes, but bait-and-switch isn't a very viable long-term business plan. They can sell a subscription once, for one month, but that's it. Then what? Just because they didn't properly prepare their staff, or didn't know the algorithm was flawed to this extent, doesn't mean the plan couldn't look like this. One way or another, someone dropped the ball here.
  11. That's not a solution. Having an SM won't help you in any way with this system. They don't have any extra information on this topic; it's basically equivalent to talking with CS. Don't tell anyone, but providing the actual solution is not an objective in this secret masterplan.
  12. Sorry, but I disagree. None of these are good intentions. We're paying the price for Fiverr not doing what it should do. The whole thing seems odd, like it was premeditated:

- Create a problem → Release the new levels system, degrade some sellers, and confuse buyers into giving lower public reviews
- Generate demand → Cause GuidanceThirst™ by giving sellers anxiety and a superficial peek at how the gigs are rated
- Give a solution → Direct people to 'Success Managers'
- Earn tons of cheddar → Cash in on the increase in 'Seller Plus' subscriptions <evil CEO laugh>

I can understand that creating an intelligent and fair system at that scale is very hard, and not every nuance can be accounted for. But this whole algorithm logic doesn't add up. Especially since it was developed and running over a longer period, with plenty of time to gain first-hand insights into how sellers and buyers operate within the specific functionalities.
  13. Can someone confirm that? There are clients that are unresponsive for months. So either you cancel the order or try to extend and hope for a reply. Either way you get a negative impact? If this turned out to be true, the logical move would be to use the extension comment to point out (to the conflict moderator / AI) that it was caused by the buyer. But that would be assigning blame and influencing the 'Conflict-free' metric anyway, or weighing down the 'Effective communication' one. Win-win
  14. An option for those who have high-quantity, smaller-volume orders and regular / repeat customers. This is not a scenario for every industry / offer.
  15. Fr, let's keep the replies in line with the topic. I second that (staying on topic), especially since there are many replies that are only loosely related to the topic, or outright random ones.
  16. Why did @vhskid assume that cockroaches meant people from developing countries? Kinda racist. The developing-countries part was one of the examples of why someone might not have many opportunities. Pulling it out of context like that is manipulation. Regardless of the scenarios behind someone's life situation, opportunities, and / or the reasons for their motivations, calling anyone a cockroach is, for me, going too far.
  17. That says more about you than about me, I didn't refer to any of that. Regardless of where they're from, and any other factors, someone who says they can do X and can't, and says they can speak X and can't, is a negative for the platform. If there are millions of such people, it's indeed like a cockroach infestation. This is just one of the examples of why someone might not have many opportunities when growing up. Regardless of the scenarios behind their life situation and / or the reasons for their motivations, calling anyone a cockroach is, for me, going too far.
  18. I would call the bolded text an observation close to how things are and an objective statement. I have the same observation, and some hard statistics would probably confirm what you said from a statistical point of view. However, this is something else: if someone doesn't have a talent, and / or was for example born in a developing country, and / or for other reasons didn't have many opportunities when growing up, that doesn't mean they don't deserve some basic dignity. Is lying / cheating / conniving bad? Yes, but cockroaches? Really?
  19. Facts are not insults. These are not facts, but your private remarks and negatively stigmatizing expressions like 'cockroaches'.
  20. I think that insulting anyone isn't good on any occasion, and hate towards any people doesn't help anyone.
  21. No, you need to be logged into your account to see it. Refresh the page and check whether you're logged in. Ummm, if I'm not logged in, how am I posting on the forum?
  22. I myself do this for revisions, due to the complex nature of the projects and the many moving parts. But now I'm worried that the new system will monitor the time that passes between (re)deliveries and that this will affect the gig score metrics. Clients involving their team to review a delivery is frequent in (not only) my industry, and oftentimes it's a bigger / longer discussion where some things need to be explained and established, because surprise surprise - not everything can be specified in detail beforehand. In my seller's dashboard, the delivery time would update automatically by adding a few days (based on what calculation, I don't know) when the order entered the revision state. Then the order was flagged as LATE when the new delivery date was missed. On many occasions, the label would even go straight to LATE without a new deadline occurring, because why not? So now it feels like I can't sigh in private without getting penalized by the new scoring, because AI Big Brother will monitor and misinterpret everything.
  23. My extension requests are usually because I'm working on something with the buyer and they need more time to provide the full complement of materials, for example, because they are a *business* client and are having trouble scheduling a zoom briefing with their coworkers. That's exactly my case. The extensions in my orders are caused by buyers. I always add some time reserve, but there is no way to anticipate everything, and longer delivery times can easily scare off the client.
  24. Oh, they DO know!!!! Whether they want to admit it, that's another story. Aside from copy-pasting evasive responses from CS, I was also assured that: Which I could only compare to magical thinking, because nothing is perfect. Even here, on the first page of this topic - after just 12 hours of pre-releasing the new system to buyers - @Kesha admits that "cancellations rectified by Customer Support affecting success scores" is an oversight and that they are "working to resolve it before the transition period concludes". On the other hand, in a similar forum topic about the new levels release (now closed), another Fiverr representative directed me to CS to submit a misjudged gig score for review. It's strange that in public (on the forum) the willingness to help / explain is better, while in private (support tickets) we get PR responses.