Hello readers!
This week, we relaunched Inside Retail as Inside E-commerce. Check out the first issue, a deep dive into Canadian e-commerce giant Shopify. If you like what you see, head over to inside.com/ecommerce to subscribe!
The U.S. government has announced a $1b investment in AI and quantum computing hubs. Of that, $140m will be invested in seven AI initiatives (see story below).
- The Trump administration released an executive order last year outlining policies and plans to maintain American leadership in AI, but it did not specify an investment amount.
- In Feb. 2020, the White House said $2b would be invested in non-defense AI by 2022.
- Recent government investments in AI:
  - EU: $1.69b
  - France: $1.69b
  - South Korea: $1.95b
- China outspends the U.S. on AI in every area except defense. VentureBeat reported that the U.S. has a low chance of maintaining its leadership position, given the scale of other countries' investments.

This story first appeared in today's Inside Business newsletter.
The National Science Foundation and other federal partners will award $140m over five years to seven AI Research Institutes. The awards, announced today, will fund AI R&D in areas like forecasting, precision agriculture, workforce development, and AI in the classroom. Each institute will receive $20m in funding:
- NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography, led by a University of Oklahoma team. The institute will offer AI certificate programs and hire researchers to work on trustworthy AI for weather, climate, and coastal hazards.
- NSF AI Institute for Foundations of Machine Learning, led by the University of Texas at Austin. The institute will work with the city of Austin and industrial tech firms to explore neural architecture optimization and other theoretical AI problems, and will offer more students tools and online coursework.
- NSF AI Institute for Student-AI Teaming, led by the University of Colorado, Boulder. This team will develop AI related to speech, gestures, gaze, and facial expressions to help students and teachers in remote and real-world classrooms.
- NSF AI Institute for Molecular Discovery, Synthetic Strategy, and Manufacturing, led by the University of Illinois at Urbana-Champaign. Researchers will work on AI tools to accelerate chemical synthesis, and potentially discover and produce new materials and compounds.
- NSF AI Institute for Artificial Intelligence and Fundamental Interactions, led by the Massachusetts Institute of Technology. The group will develop AI that incorporates physics, supported by digital learning, outreach, and workforce development.
- USDA-NIFA AI Institute for Next Generation Food Systems, led by the University of California, Davis. The institute will use AI and bioinformatics to better understand biological data and processes and address challenges like crop quality, agricultural production, and pest and disease resistance.
- USDA-NIFA AI Institute for Future Agricultural Resilience, Management, and Sustainability, led by a second team at the University of Illinois at Urbana-Champaign. The institute will offer a joint computer science and agriculture degree and work on AI research in ML, computer vision, soft object manipulation, and human-robot interactions to address labor shortages and other agricultural challenges.
Related:
- The NSF said it plans to award more than $300m overall in the coming years, including partner contributions, as part of its AI Research Institute awards.
- The NSF invests over $500m in AI each year.
WHITEHOUSE.GOV
Gartner has added new AI technologies to its annual report on emerging technologies, which identifies 30 technologies the firm says will significantly change society and business over the next five to 10 years. We highlight three of the AI entries below in today's special feature.
Embedded AI: This involves running AI/ML in embedded systems to analyze locally captured data, like a coffee machine that learns what time you want a cup. One major application is manufacturing, where machinery and hardware benefit from predictive maintenance that catches problems before they cause breakdowns and losses.
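For a concrete feel of how that works, here is a minimal sketch of on-device predictive maintenance: keep a short rolling window of locally captured sensor readings and flag values that drift far from the recent baseline. The class name, window size, and threshold below are illustrative assumptions, not any vendor's actual API.

```python
from collections import deque
from statistics import mean, stdev

class VibrationMonitor:
    """Toy on-device anomaly detector for an embedded sensor stream."""

    def __init__(self, window_size=50, threshold=3.0):
        self.readings = deque(maxlen=window_size)   # rolling window kept in local memory
        self.threshold = threshold                  # std deviations that count as anomalous

    def is_anomalous(self, value):
        # Build up a baseline before judging new readings.
        if len(self.readings) < self.readings.maxlen:
            self.readings.append(value)
            return False
        baseline, spread = mean(self.readings), stdev(self.readings)
        self.readings.append(value)
        return spread > 0 and abs(value - baseline) > self.threshold * spread

# Simulated stream: steady vibration around 1.0g, then a spike worth inspecting.
monitor = VibrationMonitor()
stream = [1.0 + 0.01 * (i % 5) for i in range(60)] + [2.5]
for i, reading in enumerate(stream):
    if monitor.is_anomalous(reading):
        print(f"reading {i}: {reading} flagged for maintenance check")
```

Everything happens on the device itself, which is the point of embedded AI: no raw sensor data has to leave the machine for the anomaly to be caught.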
Generative AI: This involves ML techniques that learn from existing content and generate novel content that retains a likeness to the original. Examples include deepfakes, which were rated the most concerning AI crime in a new UCL report. But realistic (and fake) AI-generated videos can also help with data generation and drug discovery. Oxbotica, an autonomous vehicle software company, uses deepfakes to train its self-driving systems without real-world testing.
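As a toy illustration of the underlying idea (learn the distribution of existing data, then sample novel examples that keep its likeness), the sketch below fits a simple Gaussian to a handful of made-up measurements and draws synthetic ones. It is a stand-in for explanation only, not how deepfake models or Oxbotica's system actually work.

```python
import random
from statistics import mean, stdev

# A small sample of real measurements we would like more examples of,
# e.g. braking distances (in meters) collected from road tests.
real_samples = [12.1, 11.8, 12.5, 13.0, 11.6, 12.9, 12.3]

# "Training": estimate the distribution the real data appears to follow.
mu, sigma = mean(real_samples), stdev(real_samples)

# "Generation": sample novel data points that share the original's likeness.
random.seed(0)  # deterministic output for the example
synthetic_samples = [round(random.gauss(mu, sigma), 2) for _ in range(5)]

print("real:     ", real_samples)
print("synthetic:", synthetic_samples)
```

Deepfake systems replace the Gaussian with deep neural networks over images or audio, but the pattern is the same: the generated samples are new, yet plausible enough to augment scarce training data.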
Responsible AI: This AI field encourages businesses and others to increase the transparency of their AI systems and reduce bias. Finding harmful deepfakes and focusing on AI for good are examples. U.K. lawmakers also introduced a regulation that forces businesses to explain their AI decisions or face fines...
Elon Musk's brain chip startup Neuralink will demonstrate a working device, presumably a brain-machine interface (BMI), at 3 p.m. Pacific this Friday. Musk says the implantable devices would keep humans apace with advanced AI, which he views as an existential threat. The first BMI iteration would involve surgically implanting electrodes in the brain, allowing people to control computers and smartphones with their minds.
More:
- The Tesla CEO has said that a robot would implant a chip in people's brains as efficiently and quickly as laser eye surgery. This is "still far" off, but "could get pretty close in a few years," he tweeted.
- Friday's demonstration will show "neurons firing in real-time," Musk added, like "the matrix in the matrix."
- Musk founded Neuralink in 2016 to merge AI with the human brain. Its devices have been tested on animals, with human trials set to begin sometime this year.
- From Twitter: Musk's announcement Tuesday drew a bevy of responses. Actress Kat Dennings asked, "Will this fix psychopaths?" Forbes' David Gokhshtein commented: "Yeah, I don’t know about this."
THE VERGE
YouTube removed 11.4m videos last quarter, the vast majority of which were flagged by automated systems. Like Facebook, the video platform has leaned more heavily on AI reviewers to flag and remove harmful, dangerous, and false content during the pandemic. From April to June, it removed more than twice as many videos as it did from January to March.
More:
- YouTube says its automated software removed 95% of problematic videos at first detection. Overall, 10.85m of the removed videos were flagged by automated systems, according to YouTube's latest Community Guidelines Enforcement Report.
- The company has "greatly reduced human review capacity" during the pandemic, relying more on automated systems to ensure videos comply with its policies. It made the decision to over-enforce in areas like violent extremism and content that's harmful to children.
- Normally, people assess the AI-flagged videos. In this case, YouTube skipped much of that step and hired more staffers to oversee appeals. Fewer than 3% of all removals resulted in an appeal.
- AI also detected 99.2% of the 2.1m YouTube comments that were taken down.
BBC NEWS
QUICK HITS
- Octane AI, a Shopify marketing platform offering buyer profiles, Facebook Messenger, and SMS marketing, raised $4.25m.
- Data scientist Ethan Rosenthal used computer vision and machine learning to create the optimal peanut butter and banana sandwich.
- Delight your remote team with curated gift boxes by SnackNation. Get $10 off per box until 8/27!*
PS: We are looking for Business Researchers for our Toronto office. Join us!
*This is a sponsored post.
Beth is a tech writer and former investigative reporter for The Arizona Republic. A graduate of the Walter Cronkite School of Journalism, she won a First Amendment Award and received a Pulitzer Prize nomination for her reporting on the rising costs of public pensions.
Editor