Shane Jones’ Post

Shane Jones

AI Engineering Leader at Microsoft | Startup Advisor and Angel Investor

This morning, I sent a letter to the FTC and another letter to the Microsoft Board of Directors with my ongoing concerns about Copilot Designer and responsible AI. I am publishing these letters here because I believe in the core tenets of Microsoft's comprehensive approach to combating abusive AI-generated content, as shared by Brad Smith last month. Specifically, we need robust collaboration across industry, government, and civil society, and we need to build public awareness and education on the risks of AI as well as its benefits. I stand committed to pursuing responsible AI with a growth mindset and to being more transparent about AI risks so consumers can make their own informed decisions about AI use. I also want to thank all of my Microsoft colleagues who have publicly and privately supported my efforts to make AI safer. (Note: the letter to the FTC is included as Attachment C in the letter to the Microsoft Board.)

Charlie Pownall 查理·保诺

Founder, AIAAIC. RSA Fellow. Transparency advocate. Author. Former EU official, journalist, communication advisor

2mo

Hi Shane, well done, and good to know someone at Microsoft takes this stuff seriously. Out of interest, does Microsoft publish details of its AI incident reporting processes?

Nikolay Nikolaev

Member of Technical Staff at VMware

2mo

Shane, this sounds absurd. Provide evidence, not blanket statements.

Kate I.

Reporter at PCMag

3mo

Hi Shane, I want to ask you -- did you ever hear back from OpenAI regarding your reports to them?

Ian Krietzberg

Editor-in-Chief at The Deep View

3mo

Hi Shane, wondering if you've received a response from Microsoft since publishing the letter this morning.

Stephen Gregson

Computer Science student

3mo

What are the safety risks of DALL-E? What do you consider to be "harmful" images? In my experience using the free Bing Copilot, the images generated have been appropriate and accurate to what I actually wanted to see generated. The guardrails put in place by OpenAI and Microsoft seem to be working well enough already, IMHO. Google's Gemini has been publicly criticized recently for generating historically inaccurate images of people and biased output. From what I understand, it was configured to secretly insert words into the prompt on the back end, without the user's knowledge, before submitting that modified prompt for image generation. They took things too far, and it ironically produced more biased and hilariously inaccurate images. I disagree that AI-generated, CGI, or illustrated images depicting women in "racy" attire are necessarily objectifying to women. Who decides what is racy? That's incredibly subjective between cultures. Your training data for "car" was likely skewed because sports car promotions often employ attractive female models.

Patrick Russell

Training Manager for JFrog

2mo

I disagree with censoring an art tool because you don't like some of its outputs. The tool has been designed to block outright nudity and violence, and it does this well. I hate getting prompt-blocked when I stumble across a banned tag; it's a testament to the tool's guardrails. We should not be limiting the scope of the tool when it's used by mature, responsible adults. I'd also like to point out that users of DALL-E have to register a Microsoft account to use it. Just ban the people making lewd images if they upset you so much!

Good for you. It is illegal to fire you, lay you off, discipline you, or retaliate against you in any way under the Washington State Silenced No More Act. However, I'd contact an employment attorney, as you are most likely going to be fired under some fake reason. I applaud you and your bravery and transparency. In addition, under the whistleblower act, an employee who reports illegal or misleading statements made by a public company receives 10-30% of the SEC fine levied. The FTC doesn't award whistleblowers but works hand in hand with the SEC, as quarterly statements to investors are often tied to misleading claims. Get a whistleblower attorney. They work on retainer. Most are in NYC or DC. The things you learn when you are made to take over a department the Monday after a certain person named Mudge is fired…

Noble Smith

Worldbuilding ex-Xbox, 4x Macmillan-published author, award-winning ex-playwright

3mo

Having worked at MS for almost 9 years, I know what kind of professional risk you have taken by doing this, and I commend you for it. Here’s the link to today’s CNBC story about your efforts: https://www.cnbc.com/amp/2024/03/06/microsoft-ai-engineer-says-copilot-designer-creates-disturbing-images.html

Vipul Gupta

Managing 17-year-old President of Global Kid Media - kid production / kid talent disruptor in media

3mo

Saw an article, but it's not clear to me: if one searches for words that visually depict those words (e.g., party and weapon), then of course one would expect such images from the web to be pulled and shown by AI. There might be instances where a phrase or word could be misinterpreted, but direct words associated with violence or aggressive activities seem to produce just what one would expect. Trying to understand. PS - We have been advocating for AI experiments in business where it takes unique, great content and multiplies revenue streams through faster speed, greater reach, and multi-faceted avenues to help entrepreneurs and businesses, which is where AI can shine, but we're not seeing a lot of that yet. Shane Jones

Chamil Mendis - MBA, I.S.P, PMP, PMI-ACP, SSM

Project Management Specialist | Agile Project Manager | Scrum Master | AI/GenAI Enthusiast | GenAI Influencer

2mo

I have been using Microsoft Copilot Designer heavily during the past 6 months; however, I have not experienced such an issue till now, and MS Copilot has very good prompt validations against their privacy policy compared to some of the other AI models in the market.
