Dear readers,
Welcome to Monday's Inside AI!
We'd like to offer a quick reminder about the benefits of signing up for Premium, which provides ad-free access to all Inside AI issues, along with these regularly published features:
- AI Masterclasses featuring expert thought leaders, such as this special feature on the seven ways AI improves business processes.
- Our Podcast Notes summarizing recent podcasts in the AI sector, such as this interview with Jason Matheny of the U.S. National Security Commission on Artificial Intelligence.
- Our deep dives into the latest reports on AI, including the new AI technologies that made it onto Gartner's list of emerging technologies.
- Weekly updates on the AI/ML/robotics startups that raised venture capital funding.
...and more.
If you want to start reading this content today, we're offering a 14-day FREE trial. To take advantage of the deal while it lasts, upgrade to Premium!
Twitter says it will look into its auto-cropping algorithm after it showed racial bias against black faces. Twitter uses a neural network to automatically crop photos so they take up less space in a user's feed, but the model appears to favor white faces over black faces (a sketch of how saliency-driven cropping can inherit that bias follows the bullets below).
More:
- The issue was highlighted in cryptographic engineer Tony Arcieri's recent experiment, where he demonstrated how the algorithm removed Barack Obama's face in a photo preview that users see as they scroll through feeds.
- The original photo featured both Obama and Mitch McConnell. In this case, the algorithm homed in on McConnell's face and removed Obama's in the preview.
- Obama's face did appear when Arcieri inverted the photo's colors. Intertheory’s Kim Sherrell also found that a higher-contrast smile was effective.
- A similar scenario happened with “The Simpsons” characters Lenny and Carl. In the preview, Twitter's algorithm removed Carl, who is black, and only showed Lenny, who is white.
- Twitter’s chief design officer Dantley Davis said it's now "on the team's mind" to stop cropping photos altogether. Twitter CTO Parag Agrawal noted that the model was analyzed before release but requires "continuous improvement."
- Twitter spokeswoman Liz Kelley said the company tested the model for bias before using it, but it's become clear that more analysis is needed.
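Twitter has previously described its cropping as driven by a saliency model that predicts where people look first in an image. Purely to illustrate the mechanism (a minimal sketch, not Twitter's actual code; the trained saliency model itself is assumed), here is how a crop can simply follow the saliency peak, which is why any bias in that model lands directly in the preview:

```python
import numpy as np

def crop_to_saliency_peak(image, saliency, crop_h, crop_w):
    """Return a crop_h x crop_w window centered on the saliency peak.

    `saliency` stands in for the per-pixel output of a trained saliency
    model; whichever faces that model over- or under-weights, the crop
    follows, so model bias shows up directly in previews.
    """
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = image.shape[:2]
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Dummy example: a 400x600 image with a random stand-in saliency map.
img = np.zeros((400, 600, 3))
sal = np.random.rand(400, 600)
print(crop_to_saliency_peak(img, sal, 200, 300).shape)  # (200, 300, 3)
```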

PETAPIXEL
Elon Musk says Tesla plans to publicly open up its Dojo supercomputer as a web service for training machine learning models. Tesla is developing the supercomputer to train the neural networks behind its vehicles' Full Self-Driving software.
More:
- The supercomputer would be optimized for training computer vision AI on video. Musk said its computing power would be an exaFLOP, which is 1,000 petaFLOPS (a quick units check follows this list).
- Musk confirmed the plans to open the supercomputer to the public in a tweet yesterday. He also noted that Dojo uses Tesla chips and a computer architecture optimized for neural net training rather than a GPU cluster. "Could be wrong, but I think it will be [the] best in [the] world," he said.
- Dojo translates to “place of the way” in Japanese. Last month, Tesla said it would hire software engineers and developers, based in Palo Alto, Austin, and Seattle, to work on Dojo, although "exceptional" applicants could work remotely, according to Musk. The supercomputer could be operational as early as next year.
- Related news: Tesla is reportedly close to launching its “Full Self-Driving” subscription.
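As noted in the first bullet, the exaFLOP figure is a straightforward SI-prefix conversion; a quick sanity check:

```python
# 1 exaFLOP = 10**18 floating-point operations per second;
# 1 petaFLOP = 10**15, so the ratio is exactly 1,000.
EXA, PETA = 10**18, 10**15
print(EXA / PETA)  # 1000.0 -> an exaFLOP is 1,000 petaFLOPS
```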

ELECTREK
New funding for AI/ML startups
- RVBUST, a Chinese company that builds robotics and computer vision technology, raised a $10M Series A from Gaorong Capital.
- Totient, a Cambridge, Mass.-based AI drug discovery platform, raised $10M from Mission BioCapital, Sands Capital, Viva Biotech, Kaitai Capital, Tau Ventures, Jonathan Milner, et al.
- Sentinel, a detection platform for identifying deepfakes, raised $1.35M from Jaan Tallinn, Taavet Hinrikus, Ragnar Sass & Martin Henk, and United Angels VC...
To continue reading the full list of last week's funding rounds (and receive a new update every week), upgrade to Inside AI Premium for either $10/month or $100 billed annually! For a limited time, we're offering a 14-day free trial of Premium. Click here to sign up!
YouTube says it's reinstating human moderators after its AI systems erroneously removed too much content during the pandemic. YouTube removed 11.4m videos last quarter, the vast majority of which were flagged by automated systems.
More:
- Like Facebook, the video platform turned more heavily to AI reviewers to flag and remove harmful, dangerous, and false content during the pandemic. From April to June, it removed more than double the number of videos it removed from January to March, the most taken down in any three-month period since YouTube launched in 2005.
- YouTube’s chief product officer has since admitted that the machines aren't as precise as human moderators. When the pandemic began, the company decided to err on the side of protecting users, even though that approach would result in a "slightly higher number of videos coming down."
- With AI moderators, the share of successful appeals has reached 50%, up from the usual 25%. YouTube has since reversed the removal of 160,000 videos.
- YouTube says its automated software removed 95% of problematic videos at first detection. Overall, 10.85m videos were flagged by automated systems last quarter, according to YouTube's latest Community Guidelines Enforcement Report (a quick consistency check on these figures follows the list).
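As referenced above, the two raw counts in this story can be cross-checked against each other (a derived estimate, not a figure from the report itself):

```python
# Share of last quarter's removals that were machine-flagged,
# using the two counts reported in the story above.
total_removed = 11_400_000  # videos removed April-June
auto_flagged = 10_850_000   # of those, flagged by automated systems
print(f"{auto_flagged / total_removed:.1%}")  # 95.2%, in line with YouTube's ~95% figure
```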
FINANCIAL TIMES
Peking University and Shanghai Jiao Tong University researchers created an AI system that calculates a person's biological age based on a 3D image. The technology purports to show if a person appears younger or older than their actual age.
More:
- The research suggests that middle age is the ideal time for "aging interventions." For example, the researchers found that eating yogurt, fruit, chicken, and beans, as well as eating on time, could slow visible aging, while smoking, alcohol, pickled foods, UV exposure, and antibiotics make people appear older than they really are.
- They built the AI system from a database of 5,000 people's 3D facial features and health information. The algorithm, a deep convolutional neural network, could be used as a health estimator (a toy sketch of such a regressor follows this list).
- In related news, Hong Kong-based Deep Longevity created an app, Young.ai, that can predict biological age as well as the rate at which a person is aging and recommend personalized "interventions" based on diet and exercise. The app, based on “biological clock” algorithms, will debut Sept. 29 on the Apple App Store.
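The story doesn't detail the paper's architecture, so purely as a toy sketch (PyTorch, with the input format and every layer choice assumed; here, face scans rendered as 128x128 depth maps), this is what a CNN that regresses a single biological-age value could look like:

```python
import torch
import torch.nn as nn

class AgeRegressor(nn.Module):
    """Illustrative CNN mapping a face scan to one predicted age (years)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single regression output: age in years

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = AgeRegressor()
scans = torch.randn(8, 1, 128, 128)  # dummy batch of 128x128 depth maps
print(model(scans).shape)            # torch.Size([8, 1])
```

Comparing the predicted age against a person's chronological age is then what yields the "appears younger or older" readout.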
SOUTH CHINA MORNING POST
“A Short History of Plungers and Other Things That Go Plunge in the Night”
MIT-trained roboticist and computational artist Alexander Reben used OpenAI’s GPT-3 to generate ideas and descriptions for original artworks. Reben’s AI Am I? (The New Aesthetic) exhibition features artworks dreamed up by the language-predicting deep learning model, which he fed various "start texts" before refining the outputs over time (a sketch of this prompting workflow follows the bullets).
More:
- Reben says GPT-3 came up with all the artwork descriptions, backgrounds, artist statements, and even the exhibition's title. He and others helped turn the various (and realistic-sounding) outputs into real-life artworks.
- An example is the work “A Short History of Plungers and Other Things That Go Plunge in the Night.” GPT-3 generated a description of the piece, which reads that the sculpture "contains a plunger, a toilet plunger, a plunger, a plunger, a plunger, a plunger, each of which has been modified."
- GPT-3 also described how a collective of anonymous artists from 1972, focused on an imaginary art form called "Plungism," created the landmark conceptual art, which it says was "featured on an episode of Seinfeld in 1997."
- Reben says the goal was to show that machine-generated art is "not a lesser form of art." Humans can provide a "seed of inspiration," which the AI interprets to generate artworks and text.
- You can view the online-only exhibition here.
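As mentioned above, here is a minimal sketch of the kind of start-text prompting Reben describes, using OpenAI's 2020-era completion API; the prompt and sampling parameters are illustrative assumptions, not his actual settings:

```python
import openai  # 2020-era library exposing the Completion endpoint

openai.api_key = "YOUR_API_KEY"  # assumes you have GPT-3 API access

# A hypothetical "start text"; Reben's actual prompts aren't published here.
prompt = "Artwork description: a conceptual sculpture made of household objects."

response = openai.Completion.create(
    engine="davinci",   # GPT-3's largest publicly available engine at the time
    prompt=prompt,
    max_tokens=120,
    temperature=0.9,    # higher temperature encourages more inventive output
)
print(response.choices[0].text)
```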
FORBES
QUICK HITS
- A Reddit user analyzed the costs of GPT-3, which they estimate at $10M+ to develop and $10K+/month to run.
- Amazon hasn't said what it will reveal at its fall Alexa hardware event, which takes place at 1 p.m. ET Sept. 24. Last year, it unveiled devices such as its third-gen Echo speaker, Echo Studio speaker, Echo Show 8 smart display, and Echo Frames glasses.
- The Los Angeles Police Department has used facial-recognition technology ~30,000 times since 2009.
- Tech company Abaka launched a conversational AI chatbot for financial firms.
- Adobe previewed its Sensei-powered AI Sky Replacement tool for Photoshop, which will automatically replace and enhance the sky in photos.
- Venture capitalist Edith Yeung says TikTok's machine learning algorithm is its "crown jewel."
- HEC Paris and the Institut Polytechnique de Paris opened the Center on Artificial Intelligence and Data Analytics for Science, Business and Society.
- Brands see 18.5% of e-commerce revenue from SMS marketing. See 6 top SMS campaigns here.*
*This is a sponsored link.
Beth Duckett is a former news and investigative reporter for The Arizona Republic who has written for USA Today, American Art Collector, and other publications. A graduate of the Walter Cronkite School of Journalism, she earned a First Amendment Award and a Pulitzer Prize nomination for her original reporting on problems within Arizona's pension systems.
Editor
Sheena Vasani is a journalist and UC Berkeley, Dev Bootcamp, and Thinkful alumna who writes Inside Dev and Inside NoCode.