Sep 10, 2024 05:08:37 PM Edited Sep 13, 2024 02:35:41 PM by Stan G
Stan here 👋 Coming out of another thread that had a lot of questions, I figured I would just post this, and be an open book (as much as I can). Whether you’re new here or have been hanging around for a while, I’m happy to answer your questions, chat about what’s happening in the community, or just hear what’s on your mind.
A little about myself... I was reminded that it is my 3 year anniversary with Upwork this week. I'm about an hour south of downtown Chicago, working remotely. Wife, two kids (7yr old girl and 3yr old boy). I'm a Lego addict, Diet Mountain Dew addict, and going to be visiting Halloween Horror Nights soon 👻
I oversee our Community team which includes our engagement team (Moderation), our Programs team (Groups, Podcast, etc), Community tech team (our Community engineers and tech/product manager), and our Customer Support chatbot / AI Innovation work (3 folks).
With that all said, Ask Me Anything!
Sep 10, 2024 05:25:09 PM by Ravindra B
Wow! That is a lot on your plate.
I have a question: Who decides/decided that we freelancers have to use a chatbot for getting support?
Can't at least the top-rated freelancers have email access to support?
Sep 10, 2024 07:48:33 PM Edited Sep 10, 2024 07:49:22 PM by Stan G
Really just jumping straight to the harder questions! Can I answer it from two different perspectives? From an extremely high level, as this is not my department (I guess that is a partial answer), we're constantly testing different ways to provide support, and this is often personalized to the individual. As Upwork changes, we evolve at the same time.
Taking off my Upwork hat and putting on my outsider Stan hat, just talking about operations in general (I have been running CS teams for much of my career, just not as intimately involved here): Customer Support teams generally want to limit the number of folks reaching out to a CS Agent with questions that are readily answered in the help center. Labor is one of, if not the, most expensive costs in running many businesses. Customer Support teams are often large, so it adds up quickly. When the same question is asked 100 times a day, yet the answer every single Agent is going to send is a copy of the answer found in the help center, why would any company want to spend time answering it live? This is why organizations look to limit direct access to an Agent.
A comment/question on email - I don't think it's the future, and spending resources on expanding its use doesn't do anyone a favor. Instead, finding ways to expand live chat, providing real-time assistance so nobody has to wait for a response, is the ideal state I would imagine most customers of Upwork would prefer to get to.
Sep 10, 2024 08:33:41 PM by Ravindra B
Yes, labor is expensive.
What I meant was that email access to Upwork Support would not only obviate the chat with a chatbot but also provide direct access to a human.
Indeed, if an answer is already available and easily accessible, using live support is wasteful.
If the chatbot were smart enough (as some people have said they have experienced elsewhere), then there would absolutely be no need for direct access to a human.
Sep 10, 2024 09:53:43 PM by Jeanne H
Thank you for starting this thread.
I have a question. Are the community guidelines controlled by a program from Khoros or similar, or are they controlled by you, or Upwork, or...?
Thanks!
Sep 11, 2024 12:51:23 AM by Stan G
The guidelines of what we allow in Community? That would ultimately roll up through me. I'm guessing there is another question coming 😀
Sep 11, 2024 04:57:25 PM by Jeanne H
Yes, I have more questions! 😁 Thank you so much for answering our questions. I wish we had your valuable input much earlier, but I'm very grateful you will answer questions. I have had posts "edited for community guidelines" for reasons I can't fathom. Then, when I asked a moderator, the message to me just said "edited for community guidelines." In several, there was nothing personal, nothing against Upwork, and nothing I can find in the guidelines. I find the "editing" to be inconsistent. I have seen nasty posts that remain until multiple people complain and complain, and there are still some nasty posts about people that are not removed. I fail to understand the inconsistency. It isn't just words or phrases, so help me understand.
So, to be clear, you/the program is issuing all of the community guidelines editing? The moderators do not have anything to do with it? I'm not looking to be angry at anyone; I want and need to understand how this forum functions, because sometimes it seems personal or arbitrary. Is it a program controlled by you? How do you decide what is "against community guidelines" and what passes?
I do understand you have a job, so I appreciate your answers when you have the time.
I also understand about the override and appreciate it.
Sep 12, 2024 01:53:47 PM by Stan G
I have always wanted to rename the guidelines "The Community Code", as I always want to use the below image/quote. I've honestly had very little to do with the written guidelines themselves; Val used to guard those closely and not let me near them, and I don't think we have had any changes since she departed last year (I say this jokingly but also very seriously 🤣). It's not a perfect system, and a lot of what our moderation team has to go by comes down to quick judgment calls. We don't have any huge additional guidelines behind the scenes; what you see is what we have. We do have a very large global community, which presents many additional challenges: language barriers, varying cultures, spam/scams, etc.
I'm doing napkin math, but our moderators are responsible for roughly 30 posts per hour. They'll almost always be on shift alone, so many posts get little to no time spent on them, and the team often doesn't have the ability to bounce ideas off one another. As a result, a handful of posts will take up the majority of their shifts. It's not that the team has any negative intent towards anyone, ever; it's a constant learning process for everyone. Half of our moderation team is also new within the past few months, so we're in a time of change too.
Sep 12, 2024 04:48:39 PM by Jeanne H
Is there a way to return or alter the font when writing posts? This, too, is part of accessibility. It is the same font as what appeared in the forum.
I'm dense, I guess, but I still do not understand what the code does and what the moderator does. Can you please break down the duties? Does the code kick in first? And if it does, why does it allow really bad words through until a moderator is nice enough to remove them, or they stay? In many cases, there is no need to read the entire post because the violation is in the first few words. Thanks!
And thank you for listening and answering our questions!!!
Sep 13, 2024 02:41:52 PM by Stan G
Font will be fixed everywhere next week when Djole (our engineer) is back from vacation. I just put in some small fixes where I could.
When you say code, are you talking about filtering? There are multiple layers that act as a sort of filter. There are automatic spam filters, similar to how Gmail puts things into your spam folder automatically. We then have additional rules (regex) used as an extra check; this is where we would manually block words like "coinbase", as it is constantly used for spam in our Community. And then finally our moderators step in, if they either notice something breaking the guidelines or a post is reported to them.
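(To make the regex layer concrete, here is a minimal, purely illustrative sketch of how a word-based regex filter might work. The blocklist, threshold of what gets held, and function name are all made up for illustration; this is not Upwork's actual filter.)

```python
import re

# Hypothetical blocklist -- illustrative only, not the real list.
BLOCKED_TERMS = ["coinbase", "free airline tickets"]

# One case-insensitive pattern with word boundaries, so "coinbase"
# matches inside a sentence but not inside an unrelated longer word.
_pattern = re.compile(
    r"\b(" + "|".join(re.escape(term) for term in BLOCKED_TERMS) + r")\b",
    re.IGNORECASE,
)

def flag_post(text: str) -> bool:
    """Return True if the post contains a blocked term and should be held."""
    return _pattern.search(text) is not None
```

So a post like "Buy Coinbase now!" would be held for review, while an ordinary question passes straight through to the forum.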
Sep 10, 2024 09:59:46 PM Edited Sep 10, 2024 10:00:02 PM by Jeanne H
Why does the program kick out posts for certain words? I thought I understood why this happens, and then it happened again.
Thank you.
Sep 11, 2024 12:58:16 AM by Stan G
We have a bunch of words blocked right now that are common in spam/scam posts (crypto companies, a bunch of airline names as for whatever reason that's the popular spam topic of the month, etc.). If you notice a certain word that is really causing trouble, feel free to post a broken version of it here and we should be able to fix it up. (On one hand I could post the entire list of words we filter; on the other hand, that would let spammers know exactly which words to modify. It's a game of cat and mouse.)
We do have automated spam filters in place that usually won't cause issues, but it's kind of like email where a few still get through occasionally, and we are constantly fighting those individual cases, as once one gets through, 50 more topics like it get posted.
Sep 11, 2024 03:05:40 AM Edited Sep 11, 2024 04:54:48 AM by Radia L
That would ultimately roll up through me. I'm guessing there is another question coming
Hi Stan, as the rule creator you could create something like this. If you don't want to, care to tell us the reason? The reasons mentioned in that page for banning ChatGPT pasters are valid. As I mentioned in the other thread, I've seen a few wrong answers go unnoticed for whatever reason (and I chose not to correct them on purpose).
Chatbots
I've encountered a number of smart CS bots, some of which can really help reduce labor without sacrificing user experience. Upwork's is not one of the good ones.
I think it's important to tell users to try better prompts or rephrase their questions as best as possible, and/or to replace the bot with something better (or more expensive, I don't know), because some bots are still 'dumb' even when we've provided them with good prompts.
I have a client that wanted me to work on their Uscreen site, which I had zero experience with. I chatted with the Uscreen CS bot, which was a very good experience. It could answer all of my questions instantly (so it's better than humans), except some API-related ones, which even the "first human CS tier" needed to escalate to the "second tier" (with longer response times). So basically I never needed to talk to the first tier; the bot can replace them completely in my case. That is not the case with Upwork, as we can see from how many people are complaining.
Sep 11, 2024 03:07:48 PM by Stan G
Re: Banning AI. The problem with this is not whether we want to ban it; it is a much larger problem of how. There is no way, nor will there ever be, to uncover what is AI-generated vs. what is not. There are endless tools that claim to detect this, but the fact is nothing can. Another reality is that as humans, we provide wrong answers just as much as AI does (or we're quickly getting to that point). What I think you will start to see is that we will moderate content more strongly, requiring higher quality content as a way to combat this. We have historically allowed any content to be posted as long as it does not go against our community guidelines, but AI definitely makes everything more difficult when it comes to judging content. As a result, I think communities in general are going to be forced to take a harder look at what is best for the overall community, and ensure it meets higher standards. Long story short, we can't ban what we can't see. (Does this answer it at all?)
Re: Bot intelligence. We're still in the first inning, moving into the second, so we have a long way to go. What we see today is nothing like what we will see next year or the year after. The complexity of what a chatbot needs to support definitely plays into how great an experience it will provide. As you said, Uscreen was a good experience because it had all of the answers, for the most part. That is the exact experience we hope to replicate here, with the bot able to answer the vast majority of questions and escalating to an Agent if and when necessary. We are not there today, but we're getting better by the day! (A year ago our chatbot was supported by someone doing it part time, which is normal for most organizations; as of a few weeks ago we have 3 people on it, so our goal is very much to continue improving this at a rapid speed.)
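(The "bot answers if it can, escalates to an Agent when it can't" pattern Stan describes can be sketched in a few lines. Everything here — the confidence threshold, the toy knowledge base, the function name — is a made-up illustration of the general pattern, not how Upwork's bot actually works.)

```python
# Illustrative sketch of answer-or-escalate routing.
ESCALATION_THRESHOLD = 0.75  # hypothetical confidence cutoff

def handle_question(question: str, kb: dict) -> str:
    """Look the question up in a toy knowledge base mapping
    question -> (answer, confidence); escalate when confidence is low."""
    answer, confidence = kb.get(question, ("", 0.0))
    if confidence >= ESCALATION_THRESHOLD:
        return answer
    return "Escalating you to a human Agent..."
```

The design goal is exactly what the Uscreen example shows: the bot fully replaces the first support tier for high-confidence questions, and only the genuinely hard cases cost Agent time.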
Sep 12, 2024 03:12:08 AM Edited Sep 12, 2024 12:46:27 PM by Radia L
we will moderate content stronger, requiring more high quality content as a way to combat this.
If you acknowledge that there is a problem, then you can find a solution. Stack Overflow has its own technical way to "detect" AI posts (which is prone to the detection problems you mentioned), while what I'm suggesting is to simply use human judgment. This is not a democracy; you (mods/management) own the place and created the rules, so you can ban anyone at your sole discretion if you think they're bad.
I have often written that I decline any phone/video requests unless they are just for "confirmation" purposes, because I don't speak or understand spoken English well. But most of the time I can tell if someone is pasting content from ChatGPT, and I'm sure mods, with their better English, can do the same.
It's up to the mods. I trust their judgment. If you can admit that pasted ChatGPT text is bad for the community, where it could probably also fall under the "disruptive posts" category written in your community guidelines, mods could delete/lock the posts and warn the users, telling them that the post is considered disruptive and they will be banned if they keep posting them.
If this post is reported and no action is taken, that's fine. I trust the mod's judgment. But if this one is reported and no action is taken, it means the mod needs more time to get familiar with the forum, because such posts don't even need to be reported: they most likely fall under the 'weak/low quality content' category, aside from 'disruptive'.
Problems in discussion forums aren't necessarily caused by violations of (your) community guidelines. Some polite and grammatically correct people were born with the potential to turn any topic into a flame war, or at least annoy other, more valuable users. This was true even before ChatGPT was invented.
Sep 12, 2024 12:14:56 PM by Sein M
Hi Stan,
I've no questions at this time, but I would like to thank you very much for your presence and engagement here; it really helps cultivate a healthy environment in the forum.
Best wishes.
Sep 12, 2024 03:22:17 PM Edited Sep 12, 2024 03:24:35 PM by Bilal M
Hi Stan, thank you for creating this post, and offering to answer questions as an "open book".
I was curious to know what you think about closing out important community threads about major issues or platform updates with announcements, majority opinions, or the main talking points brought forward in the discussions. Especially threads where hundreds or thousands of comments are posted.
The closure could be done organically (Upwork staff reading through comments), by using AI to assess what people said and reach a conclusion, by using polls, or by a combination of these and other methods.
For example: 1000 people shared opinions. 70% of them are in favor of A, 15% in favor of B, 15% in favor of C.
The basic idea is that having closure on highly discussed threads about platform changes will show Upwork is taking on board the opinions of its users, or bring clarity about majority opinions. The subject could be posted across Upwork social media to get more users to weigh in.
What do you think as Upwork Community leader, and as outsider Stan? 🙂 Thanks!
Sep 12, 2024 03:58:43 PM by Stan G
I'll answer it backwards. I am (in general) not a big fan of merging threads. We merge threads today, which often causes threads to grow unusually large and can become a bit confusing when the conversation doesn't flow as one would think. It can also cause threads to stay active longer than necessary, constantly being pushed to the top. I won't say everyone is a fan of this, but I think we'll slowly move away from this type of model.
The hard part is deciding when a thread should be done. On one hand, we could start closing them and taking a more active role. On the other hand, we try not to force things like this on the Community (see others wondering why we take action so often). This is the struggle: we are going to upset one side or the other regardless of what we do. So we do our best to meet folks in the middle, allowing the Community as a whole to have conversations that are as positive as possible, while also allowing for some debate and the more critical conversations to occur. Not to have this come off the wrong way, but you can always choose to mute a thread as well, so you won't see any notifications for it coming in. What may be a lot of discussion to you may be the first time someone else is discovering it, and they enjoy being able to jump into the conversation.
Sep 12, 2024 04:49:18 PM Edited Sep 16, 2024 11:24:13 AM by Ravindra B
On the one hand, merging can be helpful as other posts in that thread may shed some light.
On the other hand, merging can not only be disruptive but also drown the post.
Perhaps a post should be allowed to stay standalone for a few days before it is archived into a parent thread.
I wonder if it would be helpful to subdivide the forum into subforums.
For example, we could have these subforums:
1) JSS related
2) Connects and Instant Connection Fee
3) Harassment by client
4) Account related
5) VAT and other taxes
6) Scams
7) Feature requests
8) Rants
That way the forum would be structured, and as similar posts are grouped in a subforum, not only would merging not be essential, but a freelancer could also browse through the experiences of other freelancers (and the resolutions to problems).
Sep 12, 2024 05:48:26 PM by Stan G
For sub-forums - we've "almost" launched labels a half dozen times over the past 18 months; we've just never gotten it fully out the door. This would accomplish the same as what you are describing. If you check out the Community Blog or Videos, we utilize labels as a way to drill down to whatever content you want to see. In theory we can do the same here in the forums: when you / any user creates a topic, you would choose the predefined label(s) that best describe what you are posting about, and you would then be able to filter down to that content. Let me check if we can just get it enabled (if only in one forum, to test it).
Sep 12, 2024 07:16:19 PM by Ravindra B
I am talking about separate rooms, just as Upwork Community has the following rooms: Freelancers, New to Upwork, Clients, Coffee Break, Agencies, Support Forum etc.
Drilling down will help place the posts in the correct room.
People can just as easily go to the appropriate room and post there.
The posts should be organized by room.
Sep 13, 2024 02:45:48 PM Edited Sep 13, 2024 03:34:51 PM by Stan G
I enabled a really quick example of how Labels can filter down content in this Coffee Break forum.
When you create (or edit) a topic, you'll have the ability to add labels on the right side. You'll then be able to filter down to specific topics from a few different spots to see just the topics you want to see. Is this kind of what you are speaking of?
Sep 13, 2024 03:50:32 PM Edited Sep 13, 2024 03:50:59 PM by Ravindra B
No, Stan.
I am talking about subforums, i.e. posts segregated by topic.
Just as we have the subforums Freelancers, Clients, etc.
Here is a practical example from the Techsupport forums:
Sep 13, 2024 05:15:06 PM by Stan G
I think we're thinking of the same idea; it's just a different front end, but still the same number of layers.
Hardware Support -> Overclocking -> Topic
is still as deep as
Coffee Break -> Fun Discussions -> Topic
As we launch labels (which could be called sub-forums), we could change the layout so each label is displayed the same as what is under "Hardware Support" in your example. We could also just launch additional forums, which we have long debated, but we've found it can be overwhelming for users new to Upwork or to forums to know where to go and when. Next year, as we upgrade our platform, we'll likely take the opportunity to restructure some parts of the forums.