

Editor’s note: Ryan Sloan, a data scientist based in Seattle, wrote this guest post after assessing the GeekWire 200, our list of top Pacific Northwest startups.

I recently read that a company in Finland is using AI to find the “perfect coffee blend.” And here I am buying an imperfect blend of beans from a local coffee roaster like a sucker.

There’s little question that AI is everywhere. I have a wide network of product managers and data scientists, and the vast majority of them are working on an AI integration or product of some sort. Companies talk about AI with enthusiasm — who doesn’t want “perfect” coffee?

The market’s roaring enthusiasm for AI technology doesn’t transfer to individuals, though. Gallup’s 2024 survey found that only 13% of Americans believe AI does more good than harm. On top of that, 77% don’t trust businesses to use AI responsibly.

I’m a data scientist in Seattle, so I wondered: how are local companies approaching the AI trend? Is it really everywhere in the startup bubble, or am I in an even smaller AI bubble? Is the hype out of control? Are they publicly committed to responsibility? (Spoiler: You’re not going to like all the answers).

I took a deep dive into the public-facing content of some of the fastest-growing startups in the Pacific Northwest to analyze their AI-related language.

Data and methods

The GeekWire 200 is a ranked list of 200 fast-growing startups in the Pacific Northwest. I built a crawler to scan the public-facing websites of the companies on that list, and extract their text. After filtering out sites that prohibit automated crawling, I was left with 187 startups. I pulled down the first two “layers” of content from their websites (the homepage, and all pages linked from it), and extracted user-facing text to analyze common phrases.
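For the curious, here’s a rough sketch of that crawl in Python, assuming the usual requests / BeautifulSoup / urllib.robotparser toolkit. The helper names are mine, and the real crawler involved more plumbing (retries, politeness delays, deduplication) than shown here:

```python
import urllib.robotparser
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def can_crawl(url, agent="*"):
    """Check robots.txt; sites that prohibit crawling get dropped."""
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(root, "/robots.txt"))
    try:
        rp.read()
    except OSError:
        return False
    return rp.can_fetch(agent, url)


def extract_text_and_links(url):
    """Fetch one page; return its user-facing text and same-site links."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()  # strip non-user-facing content
    text = " ".join(soup.get_text(separator=" ").split())
    site = urlparse(url).netloc
    links = {
        urljoin(url, a["href"])
        for a in soup.find_all("a", href=True)
        if urlparse(urljoin(url, a["href"])).netloc == site
    }
    return text, links


def crawl_two_layers(homepage):
    """Pull the homepage plus every same-site page it links to."""
    if not can_crawl(homepage):
        return []  # site prohibits automated crawling; exclude it
    text, links = extract_text_and_links(homepage)
    pages = [(homepage, text)]
    for link in sorted(links):
        try:
            pages.append((link, extract_text_and_links(link)[0]))
        except requests.RequestException:
            continue  # dead links happen; skip and move on
    return pages
```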

After an initial analysis of frequent phrases, I built three lists:

Markers for AI. This includes things like “AI,” “artificial intelligence,” and “Generative Models.”

Markers for hyperbole. This includes things like “visionary,” “bleeding-edge,” “revolutionary,” and “perfect.”

Markers for AI responsibility. Things like “Responsible AI,” “bias mitigation,” and “AI ethics.”
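To make that concrete, a simplified version of the tagging might look like this. The lists below are abbreviated, and the matching logic (case-insensitive whole-phrase search) is illustrative rather than the exact code I ran:

```python
import re

# Abbreviated versions of the three marker lists.
AI_MARKERS = ["ai", "artificial intelligence", "generative models"]
HYPE_MARKERS = ["visionary", "bleeding-edge", "revolutionary", "perfect"]
RESPONSIBILITY_MARKERS = ["responsible ai", "bias mitigation", "ai ethics"]


def has_marker(text, markers):
    """Case-insensitive whole-phrase match against one marker list."""
    lowered = text.lower()
    return any(re.search(r"\b%s\b" % re.escape(m), lowered) for m in markers)


def tag_company(site_text):
    """Flag one company's site text against each of the three lists."""
    return {
        "mentions_ai": has_marker(site_text, AI_MARKERS),
        "hyperbole": has_marker(site_text, HYPE_MARKERS),
        "responsibility": has_marker(site_text, RESPONSIBILITY_MARKERS),
    }


print(tag_company("Our revolutionary AI platform maximizes engagement."))
# {'mentions_ai': True, 'hyperbole': True, 'responsibility': False}
```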

The way these three lists overlap (or don’t) is revealing. Onto the findings!

Findings

The first major difference is clear: 73% of B2B companies are pitching AI, but less than half of consumer and R&D startups are. B2B startups have focused these pitches on efficiency: when they talk about AI, they reach for terms like “faster” and “maximize engagement.” Consumer companies lean on terms like “conversation” and “personalization,” emphasizing the interactivity enabled by generative AI.

I don’t go to company websites expecting humility, but the language associated with AI often goes a few steps beyond optimism. I looked at the overlap of hyperbolic language and text about AI. Pre-market R&D companies use it the most (and that might be where it’s most warranted), but this language isn’t reserved for the lab. B2B and B2C companies alike are on the “cutting-edge,” building “revolutionary” tech. Speaking just for myself, this language almost always rings hollow, whether it’s about a spreadsheet or the “perfect” AI-generated coffee blend. 🥱

Gallup’s findings revealed that people don’t have high hopes that companies will use AI responsibly. But there’s a light at the end of the anxiety-tunnel: 57% of respondents reported that their concerns would be reduced if businesses were transparent about how AI is used. So I looked at how companies discuss their commitments and actions around responsible AI.

The current state of the world won’t inspire hope in those 57% of Americans. Only 19% of the companies talking about AI shared anything at all about their commitments to responsible AI, model evaluation, or bias mitigation. I didn’t expect a majority, but I was a little taken aback. There wasn’t a high bar for inclusion here: merely stating “StartupCo believes in responsible AI” would’ve counted. When the vast majority of companies aren’t even paying lip service to responsibility, is it any wonder that people don’t trust them?

It doesn’t paint a rosy picture, but there are bright spots. Some PNW startups are doing this well. Responsive AI’s Commitment to Ethical AI and Humanly’s Ethical AI Manifesto outline the design principles and frameworks used to develop their products. Textio’s Building Responsibly page describes some of the procedural and statistical methods used to evaluate models and mitigate bias. If you’re a team building an AI product and you’re not sure where to start with responsible AI, you can look to peers already working on this.

If you’re ready to roll up your sleeves, Data Statements provide a clear place to start. This form of documentation was originally proposed by Emily M. Bender and Batya Friedman of the University of Washington in a 2018 paper. As Gallup reported, transparency is a key first step toward winning consumer trust. But documenting the provenance, characteristics, and handlers of your data provides benefits that go beyond building customer confidence. Every dataset is biased, whether you’ve quantified it or not. Data statements like these also give builders a framework for identifying gaps and risks. They’re fertile soil for growing your responsible AI efforts.
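As a starting point, here’s a minimal template sketched as a Python dict. The field names paraphrase the section headings from the 2018 paper; the descriptions are my own shorthand, not the paper’s exact wording:

```python
# Field names paraphrase the schema in Bender & Friedman (2018);
# the prompts are shorthand, and the values you'd fill in are
# specific to your own dataset.
DATA_STATEMENT_TEMPLATE = {
    "curation_rationale": "Why these texts were chosen and included.",
    "language_variety": "Which language(s) and dialects, e.g. en-US.",
    "speaker_demographic": "Who produced the text (age, gender, region...).",
    "annotator_demographic": "Who labeled it, and their backgrounds.",
    "speech_situation": "Context of production: time, place, modality.",
    "text_characteristics": "Genre, topic, and structure of the texts.",
    "provenance": "Where the data came from and who has handled it.",
}
```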

Faced with overstocked aisles of AI products, consumers will vote with their wallets. When the hype fades, unease about responsible AI may remain. Why should they trust your company? Have you earned that trust? What do you owe to your customers and neighbors, and how can you get started today?
