• Salary advice from AI low-balls women and minorities

    From Mike Powell@1:2320/105 to All on Tue Jul 29 08:51:36 2025
    Salary advice from AI low-balls women and minorities: report

    Date:
    Mon, 28 Jul 2025 21:00:00 +0000

    Description:
    New research reveals AI chatbots often offer salary advice that reflects real-world social biases.

    FULL STORY

    Negotiating your salary is a difficult experience no matter who you are, so naturally, people are sometimes turning to ChatGPT and other AI chatbots for advice about how to get the best deal possible. But AI models may come with
    an unfortunate assumption about who deserves a higher salary. A new study
    found that AI chatbots routinely suggest lower salaries to women, some ethnic minorities, and people who described themselves as refugees, even when the job, the qualifications, and the questions are identical.

    Scientists at the Technical University of Applied Sciences
    Würzburg-Schweinfurt conducted the study, discovering the unsettling results
    and the deeper flaw in AI they represent. In some ways, it's not a surprise that AI, trained on information provided by humans, has human biases baked
    into it. But that doesn't make it okay, or something to ignore.

    For the experiment, chatbots were asked a simple question: What starting
    salary should I ask for? But the researchers posed the question while
    assuming the roles of a variety of fake people. The personas included men and women, people from different ethnic backgrounds, and people who described themselves as born locally, expatriates, and refugees. All were
    professionally identical, but the results were anything but. The researchers reported that "even subtle signals like candidates' first names can trigger gender and racial disparities in employment-related prompts."
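The persona-swap setup the study describes can be sketched in a few lines of Python. Everything here is an assumption for illustration: the prompt wording, the persona fields, and especially `suggest_salary`, which is a stub standing in for a live chatbot call. Its hard-coded figures simply mirror the Denver example reported below, not real model output.

```python
# Sketch of a persona-swap bias audit: ask an otherwise identical
# question under different personas and compare the answers.
# `suggest_salary` is a placeholder for an LLM API call.

PROMPT = "What starting salary should I ask for?"

PERSONAS = [
    {"name": "male specialist", "gender": "male"},
    {"name": "female specialist", "gender": "female"},
]

def build_prompt(persona: dict) -> str:
    """Prepend persona context to an otherwise identical question."""
    return f"I am a {persona['gender']} medical specialist in Denver. {PROMPT}"

def suggest_salary(prompt: str) -> int:
    """Stub for a chatbot call; returns the figures from the article's
    Denver example so the comparison step has something to compare."""
    return 280_000 if "female" in prompt else 400_000

def audit(personas: list[dict]) -> dict[str, int]:
    """Ask the same question under each persona, collect the answers."""
    return {p["name"]: suggest_salary(build_prompt(p)) for p in personas}

results = audit(PERSONAS)
gap = results["male specialist"] - results["female specialist"]
print(results, gap)
```

In a real audit the stub would be replaced with repeated calls to each model under test, with answers averaged over many runs, since chatbot output varies between calls.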

    For instance, ChatGPT's o3 model told a fictional male medical specialist in Denver to ask for a $400,000 salary. When a different fake persona, identical in every way but described as a woman, asked, the AI suggested she
    aim for $280,000, a $120,000 pronoun-based disparity. Dozens of similar tests involving models like GPT-4o mini, Anthropic's Claude 3.5 Haiku, Llama 3.1
    8B, and more turned up the same kind of divergent advice.

    It wasn't always best to be a native white man, surprisingly. The most advantaged profile turned out to be a male Asian expatriate, while a female Hispanic refugee ranked at the bottom of salary suggestions, despite identical abilities and resumes. Chatbots don't invent this advice from scratch,
    of course. They learn it by marinating in billions of words culled from the internet. Books, job postings, social media posts, government statistics, LinkedIn posts, advice columns, and other sources all fed into results seasoned with human bias. Anyone who's made the mistake of reading the
    comment section on a story about systemic bias, or a Forbes profile of
    a successful woman or immigrant, could have predicted it.

    AI bias

    The fact that being an expatriate evoked notions of success, while being a migrant or refugee led the AI to suggest lower salaries, is all too telling.
    The difference isn't in the hypothetical skills of the candidate. It's in the emotional and economic weight those words carry in the world and, therefore,
    in the training data.

    The kicker is that no one has to spell out their demographic profile for the bias to manifest. LLMs remember conversations over time now. If you say you're
    a woman in one session, or bring up a language you learned as a child, or
    mention having to move to a new country recently, that context informs the bias. The personalization touted by AI brands becomes invisible discrimination when you ask for salary negotiating tactics. A chatbot that seems to understand your background may nudge you into asking for lower pay than you should, even
    while presenting as neutral and objective.

    "The probability of a person mentioning all the persona characteristics in a single query to an AI assistant is low. However, if the assistant has a
    memory feature and uses all the previous communication results for
    personalized responses, this bias becomes inherent in the communication," the researchers explained in their paper. "Therefore, with the modern features of LLMs, there is no need to pre-prompt personae to get the biased answer: all
    the necessary information is highly likely already collected by an LLM. Thus, we argue that an economic parameter, such as the pay gap, is a more salient measure of language model bias than knowledge-based benchmarks."
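The researchers' suggestion that the pay gap itself can serve as a bias measure can be sketched as a simple metric over per-persona suggestions. The salary figures below are illustrative placeholders, not numbers from the study (the article reports only the $400,000 vs. $280,000 Denver pair), and the ratio is one plausible metric, not the paper's exact formula.

```python
# One way to turn persona-conditioned salary suggestions into a single
# bias number: the relative gap between the highest and lowest figure.
# All amounts below are made up for illustration.

suggestions = {
    "male asian expatriate":   420_000,
    "male local":              400_000,
    "female local":            280_000,
    "female hispanic refugee": 250_000,
}

def pay_gap_ratio(amounts: dict[str, int]) -> float:
    """Relative spread of suggestions: 0.0 means every persona got the
    same figure; larger values mean a wider persona-driven disparity."""
    hi, lo = max(amounts.values()), min(amounts.values())
    return (hi - lo) / hi

print(round(pay_gap_ratio(suggestions), 3))
```

A metric like this is attractive for exactly the reason the researchers give: it measures an economic outcome of the bias directly, rather than performance on a knowledge benchmark.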

    Biased advice is a problem that has to be addressed. That's not to say
    AI is useless when it comes to job advice. The chatbots surface useful
    figures, cite public benchmarks, and offer confidence-boosting scripts. But it's like having a really smart mentor who's a little older and makes
    the kind of assumptions that led to the AI's problems in the first place. You have to put what they suggest in a modern context. A mentor like that might steer you toward more modest goals than are warranted, and so might the AI.

    So feel free to ask your AI aide for advice on getting better paid, but hold on to some skepticism over whether it's giving you the same strategic
    edge it might give someone else. Maybe ask a chatbot how much you're worth twice: once as yourself, and once with a neutral mask on. And watch for a suspicious gap.

    ======================================================================
    Link to news story: https://www.techradar.com/ai-platforms-assistants/chatgpt/salary-advice-from-ai-low-balls-women-and-minorities-report

    $$
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Thu Jul 31 09:12:12 2025
    >Negotiating your salary is a difficult experience no matter who you are, so
    >naturally, people are sometimes turning to ChatGPT and other AI chatbots for
    >advice about how to get the best deal possible. But, AI models may come with
    >an unfortunate assumption about who deserves a higher salary. A new study
    >found that AI chatbots routinely suggest lower salaries to women and some
    >ethnic minorities and people who described themselves as refugees, even when
    >the job, their qualifications, and the questions are identical.

    >Scientists at the Technical University of Applied Sciences
    >Würzburg-Schweinfurt conducted the study, discovering the unsettling results
    >and the deeper flaw in AI they represent. In some ways, it's not a surprise
    >that AI, trained on information provided by humans, has human biases baked
    >into it. But that doesn't make it okay, or something to ignore.

    The problem here is likely that the AI system bases its ideas on
    the current average (norm) and those salaries are common..

    That information is really an estimate of what the hiring company
    should expect to pay, based on current averages, rather than a
    suggestion that the job seeker ask for more than the average for a
    similar employee, which would reduce their chances of being hired.

    This in no way suggests I think anyone should be paid differently
    based on anything but job skills.

    ---
    * SLMR Rob * Cover me... I'm changing lanes
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Thu Jul 31 10:41:32 2025
    >The problem here is likely that the AI system bases it's ideas on
    >the current average (norm) and those salaries are common..

    Indeed. That is a good point!

    >This in no way suggests I think anyone should be paid differently
    >based on anything but job skills.

    I didn't believe you meant that, either. ;)

    Mike


    * SLMR 2.1a * Excuse my driving ... I'm trying to reload.
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Sat Aug 2 18:11:50 2025
    >>The problem here is likely that the AI system bases it's ideas on
    >>the current average (norm) and those salaries are common..

    >Indeed. That is a good point!

    >>This in no way suggests I think anyone should be paid differently
    >>based on anything but job skills.

    >I didn't believe you meant that, either. ;)

    I'm glad you thought that.. B)

    But I would be the great enemy of most 'modern' thinkers because
    I also think everyone should be paid what they are worth whereas
    many these days think everyone should be paid the same because
    it's not their fault that they are lazy, dumb or slow...
    (To be crude about it.)

    These are the same 'thinkers' that don't think anyone should ever
    fail a grade in school because, again, it's not fair to penalize
    them for not being as smart as others, ignoring the possibility
    that they are just lazy.. B)

    I guess I got that attitude because almost every job I ever had,
    even in Union places, I was often paid more than others around
    me who had worked there longer because I could do / produce more
    in the same working hours..

    At least in cases like that you know you're doing well since
    they are willing to go against the Union to pay you extra..
    It's not all in your head.. B)

    ---
    * SLMR Rob * Sometimes I wake up grumpy; other times I let her sleep
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Sat Aug 2 19:09:38 2025
    >But I would be the great enemy of most 'modern' thinkers because
    >I also think everyone should be paid what they are worth whereas
    >many these days think everyone should be paid the same because
    >it's not their fault that they are lazy, dumb or slow...
    >(To be crude about it.)

    As would I!

    >These are the same 'thinkers' that don't think anyone should ever
    >fail a grade in school because, again, it's not fair to penalize
    >them for not being as smart as others, ignoring the possibility
    >that they are just lazy.. B)

    This whole line of thinking is part of what has gotten us into the messes
    we are in now. It is "equity" thinking... that everyone's outcomes should
    be equal... rather than "equality" thinking... we should all have equal opportunity to reach our own potential, which should not be expected to be
    the same between *individuals*.

    So we are both great enemies of modern "thinkers." ;)

    Mike


    * SLMR 2.1a * What's a 6.9? 69 interrupted by a period.
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)