
charlsmcfarlane
Member · 360 posts

Everything posted by charlsmcfarlane

  1. Sellers can’t influence the content of a review, but asking for a review is a totally reasonable thing to do.
  2. It’s a good score. I’m sitting at 9 too and am pleased with that. I’d love to get it to 10, but none of the advice or information sheds any light at all on what steps to take to get there. Also, nobody is perfect, so I think 9 is something to be happy about. Importantly, though, even for sellers with a good score, there are a ton of questions that need answering and some things that need to change. I’m really curious about all the questions regarding extensions, revisions etc., but, for me, it’s all about the value for money rating. That thing needs to go.
  3. True. Although, I'm starting to get the feeling that the same could be said for any action, done for any reason.
  4. This is genius. I only really put up prices when I hit capacity, but this is gold.
  5. This is a good idea. However, I think getting everyone on the same page is an essential first step – not the only step Fiverr needs to take here. They need to be able to answer everyone's questions and fix what is broken. Empathy and clarity are the beginning of fixing this, not the end.
  6. We can only hope. Was the order placed a while back?
  7. We want Fiverr to take this thread seriously and I wonder if this conversation might turn into a distraction.
  8. They could employ a system that incentivises sellers to really consider whether the buyer’s review is valid before getting support involved. I mean, it sounds like bringing support into the fray now damages sellers’ stats, so they could make it so that frivolous use of a review contest system could damage the seller’s stats. Conversely, contesting an objectively incorrect review should not affect that seller’s stats.
  9. @Kesha I have a question about the seller communication metric. There are times when a conversation naturally comes to an end. Let's say I've been discussing something with a client in the order chat, the conversation ends, and the buyer responds with something like "thanks". Do I now need to reply with something in order to stop the system from registering my lack of response to their message as some kind of problem? I've had situations where the order page tells me "[insert buyer name here] has been waiting for an update from you for 11 hours", for example. Is that being logged as some sort of problem? Sometimes a buyer doesn't need a response. They could be saying "thanks" or "ok, talk to you later". Do I now need to open the order and always have the last word, or will my communication metrics tank?
  10. It's so silly that we're having to find workarounds, so we can do the work without getting penalised for revising orders or extending delivery deadlines.
  11. Every time I access my Fiverr account through a browser, it presents the banner shown in the attached image. It's nice to be congratulated on getting my "first" Fiverr's Choice order. However, this banner comes up every time I go to the site, and my first Fiverr's Choice order was placed and completed years ago. I have completed 15 Fiverr's Choice orders in 2024 alone and have two in progress, so I'm not quite sure why it's happening. Anyway, this is broken and slightly annoying. Maybe not the most important thing to fix right now, but I thought I should flag it! Thanks.
  12. I think for me, and for a lot of people, the biggest problem with this new system is the value for money score. I’ve already articulated the issue I have with this measure, but I want to clarify what I mean a little better. This score is a combination of two distinct variables – cost and quality. Here’s an example to illustrate how this measure is broken: if these two variables were split into two separate survey questions and a buyer rated cost at 1 star and quality at 5 stars, the average of those two metrics would be 3 stars. Thus, the value for money metric would be 3 stars. However, providing this metric as a combined score, from a combined question, makes the score meaningless. There is no way to know, by looking at 3 stars, whether a buyer was ecstatic with the work but unhappy about the cost, or happy with the cost but disappointed with the result. The new tags provide a little more context, but not enough, and they are only visible to the seller, so they are useless in providing context to potential buyers. There’s a huge spectrum between those two extremes too: 3 stars could also mean that a customer was moderately happy with both the cost and the result. Basically, the value for money score is meaningless to both sellers and buyers because it provides nowhere near enough nuance about buyers’ actual opinions. The only way the score has any meaning, in its current state, is if it’s very low or very high, but the only way to get a very high score would be to deliver exceptional work and undercharge for it, because it’s human nature to think, “yes, it was amazing work, but it could have been cheaper” – and of course it could have been cheaper. That doesn’t mean it should have been cheaper. So to win on “value for money”, sellers are incentivised to drop their prices without dropping the quality of their work.
Then the more that sellers drop their prices, the more buyers will expect to get exceptional work for cheap, so they’ll continue to rate “value for money” poorly and sellers will have to keep dropping their prices indefinitely. I can only see this being a race to the bottom on price, meaning decent sellers that deliver quality work will start to lose revenue (and so will Fiverr). If these two variables must be surveyed, they should be surveyed separately. Personally, I don’t think cost should come into the ratings at all, because the buyer agreed on a cost at the beginning of the exchange. If they thought it was too much money, they shouldn’t have paid it in the first place – and if they paid and are unsatisfied with the result, they can leave a poor review about that result, or discuss other options with the seller and/or Fiverr support. Again, if cost must factor into the ratings, I wouldn’t even mind getting low scores for it, because I know I’d get great reviews for quality. If you want good quality, you have to pay for it, and if you want something for nothing, then I don’t want to work with you anyway. Now, I’m aware that this is not a new rating. This rating has existed in private reviews for a while, and we are only now seeing the metric. My argument is that this is a flawed metric in general and should never have been in the private reviews in the first place. Besides this, I feel there are other issues with these new updates, but, for me, they are just little hiccups – bugs that need fixing and algorithms that need tweaking. The overall spirit and concept of this new system is great. I wouldn’t want Fiverr to roll this back to the old system, because this is a step in the right direction. Unfortunately, this value for money rating and a few other little things are definitely a problem and need to be changed. Thanks for listening to your community of sellers on this.
I understand that the people responding in this forum are probably a small sample, compared to the total number of sellers on the platform. However, when everyone in this small sample is saying much the same thing, it’s not unreasonable to extrapolate the consensus to the entire population and infer that it’s pretty universal. Again, thanks for listening.
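The collapse of two variables into one number can be sketched in a few lines. This is a hypothetical illustration only – it assumes the combined score behaves like a plain average of two 1–5 ratings, which is not Fiverr's published formula:

```python
# Hypothetical illustration: if "value for money" were a plain average of
# a quality rating and a cost rating (both 1-5), three very different
# buyer experiences would collapse to the same single number.
scenarios = {
    "amazing work, felt expensive": {"quality": 5, "cost": 1},
    "cheap but disappointing":      {"quality": 1, "cost": 5},
    "middling on both":             {"quality": 3, "cost": 3},
}

for label, rating in scenarios.items():
    combined = (rating["quality"] + rating["cost"]) / 2
    print(f"{label}: {combined} stars")  # every scenario prints 3.0 stars
```

All three scenarios produce an identical 3.0, which is exactly why a single combined number carries no usable signal for either sellers or prospective buyers.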
  13. I'm dealing with a buyer right now who has asked me to extend the delivery deadline several times because they need more time. I’m more than happy to do so to help them out. I also have other buyers who have ghosted me for months so I request extensions to keep the order from going late. Are you telling us that accommodating customer requests or requesting extensions because the buyer has gone quiet can hurt our seller success score? If so, that needs to be changed. The suggested solution of making the delivery date really long makes no sense, as a ridiculously long delivery date will be off-putting for buyers.
  14. This is true. It’s about 1 post every 7(ish) minutes for the last 28 hours. That’s got to be hard to keep up with. Thanks Fiverr team for taking everyone’s thoughts seriously and providing this space to share feedback.
  15. This suggestion has nothing to do with private reviews. I’m talking about objective, system-recorded metrics. For example, maybe in order “x”, which has been flagged as a negative for communication, I review it and see that my communication score was impacted because I didn’t respond to a buyer within 24 hours. That’s something I can identify and address. The buyer is irrelevant. If you review the original message I sent, it adds the extra context.
  16. I offer training sessions and regularly have buyers ask to reschedule to a time after the delivery deadline. In this situation, I request an extension to benefit the buyer. I also have people that place an order and then disappear for months on end. Does this mean extending the order to give the buyer more flexibility in this way is hurting my stats? I really hope not.
  17. I think you misunderstood my point. There are lots of metrics that Fiverr is using (cancellations, disputes etc.) that have nothing to do with private reviews. The system could suggest orders it has identified as having swayed the scores using objective data, rather than private reviews.
  18. For the most part, I like the look of this new level system. I think the theory is great, but some of the execution is not. Here are what I believe to be the main issues, in order of urgency.
1. The value for money rating is ineffective and will result in a race to the bottom on pricing. Quality and cost are two separate variables, so they should be surveyed independently of one another. I would be absolutely fine with a buyer saying that the work was fantastic but expensive, because I charge what the work is worth. Putting both variables into a single survey question could cause the quality of the work to be called into question when the concern was actually the cost.
2. A generic guide on how to improve one’s seller success score is insufficient. Sellers should be given specific examples of when they could have done better, so they can learn from the experience and improve. Providing a generic guide is like saying, “find the needle, but we’re not going to tell you which haystack to look in”.
3. Presenting the seller success score as a whole number is not enough information. My success score shows as a 9, but I have no idea if I’m on the edge of dropping to an 8 or only just below a 10. Obviously I’m going to keep pushing to improve, but I don’t have enough information to know where to focus my attention. Will I wake up tomorrow at an 8? Who knows? Please present the score with at least one decimal place, for example “9.6”.
4. A month-long transition is not enough time to allow people to improve their score after potentially years of data dropping into the system by which they are measured.
5. None of this information is available to me on the mobile app, which is where I do most of my checking and messaging.
Thanks for considering these improvements. I don’t believe this update will ever be rolled back, so I don’t want to complain about it. I’m trying to offer suggestions on how to fix what is broken/unfinished.
  19. Precisely my point. If the system is objectively measuring these data automatically, and they’re unrelated to private reviews, surface some examples of where the seller fell short. Example: “Delivery time – Strong negative impact Here are three orders that have impacted this score negatively. Take some time to review them and consider how you can improve in future. Order 1 Order 2 Order 3” This doesn’t highlight a specific buyer as having given negative feedback. It just gives a seller specific orders to review, rather than a page of generic tips that don’t help. It also reinforces to the seller the validity of the score, because the system can cite multiple examples of where the seller fell short, rather than just giving a score that seems completely at odds with every statistic that seller has ever seen.
  20. Yes, I wouldn’t expect to see anything regarding private reviews but there are more objective measures that could be surfaced – orders with ‘conflict’, for example. Some of these data are generated by objective metrics rather than only private reviews.
  21. In my opinion, this new system is not providing nearly enough information for sellers to improve. It seems that everyone has these reports of negative impacts that don’t correspond or correlate, at all, to the stats that they can see. I understand that some of this information comes from private reviews, and those need to stay private, but if any of these scores come from objective metrics, please show us what the metric is and where, specifically, we have fallen short on it. The help page it sends us to is basically a ‘how not to be a terrible seller’ page. Unfortunately, the tips on this page are practically useless for a lot of us, as they are absolutely obvious and completely in line with the way that any decent seller is already working. Rather than saying things like, ‘ensure you are communicating regularly with customers throughout the order’, the system should be highlighting specific situations where we objectively did not do that. For example, show specific orders where the system has identified an issue, so we can review the order and make an effort to understand where we could have done better. The system should be able to surface this information, seeing as it must be referencing it anyway in order to establish the score. This is one example of how the system could provide actionable feedback, rather than generic guides that don’t really help. Basic analogy – you’re building a Lego set and it has 100 steps. You get to the end and the model is wrong. It’s way more helpful if someone can tell you, “you made a mistake at step 34”, rather than just giving you the instructions again and making you work it out on your own.
My success score has landed at a 9, which I’m relatively happy with, but even still, I can’t glean any useful information from the new metrics about how to bring my 8/10 gigs up to 9s and 10s, because the guidance is way too generic and, in some cases, completely at odds with the data we can actually see. I think improving the information sellers get, and how they can turn it into actionable feedback, will make these new updates much more useful.
  22. This is one of my biggest concerns. If you ask someone about the quality of the delivery whilst also asking them about the price, people will naturally think, “well, it could have been cheaper”. Delivery quality is a variable and cost is a variable, so they should be surveyed separately.
  23. This is a real concern. If the value for money rating has the effect I’m foreseeing, it will make it so people won’t be able to make good money anymore and may just leave. As I said, I hope I’m wrong.
  24. There is not enough information. It seems that everyone has these reports of negative impacts that don’t correspond or correlate, at all, to the stats that they can see. For example, my second-highest-performing gig, which is Fiverr’s Choice for many search terms, has 89 reviews and 100% of them are five stars. That same gig on the new success page lists “buyer satisfaction” as a significant negative aspect. So this clearly isn’t drawing information from the actual ratings: either it’s drawing conclusions from private ratings, or it’s calculating this information from some other objective metric. If it’s the latter, it should highlight specific orders that we can review in order to work out how to improve. I understand why the system would not highlight specific orders where a customer has given negative private feedback, as that would negate the privacy element. However, if this score comes from some objective metric, show us what that metric is and where, specifically, we have fallen short on it. On a related note, the guides explaining how to improve one’s score are practically useless, as they basically explain “how to be a good seller”, giving various tips that are absolutely obvious and completely in line with the way that any decent seller is already working. Rather than saying things like, “ensure you are communicating regularly with customers throughout the order” (obviously!), the system should be highlighting specific situations where we objectively did not do that. Highlighting these situations would not need to encroach on private reviews, as the number and frequency of messages sent, and how much detail is in those messages, can be objectively measured by the system. This is one example of how the system could provide actual, purposeful, actionable feedback.