Researchers in South Korea developed a urine test for diagnosing prostate cancer that relies on machine-learning algorithms. The technique, which reportedly works in 20 minutes, could be used to diagnose other cancers using only urine samples, according to research published in the journal ACS Nano.
More:
- According to the researchers, prostate screenings typically rely on a prostate-specific antigen test using blood, which has a high rate of false positives. This often leads to unnecessary biopsies, they said.
- The scientists set out to develop a noninvasive alternative: an ultrasensitive, electrical-signal-based biosensor that detects four cancer biomarkers in urine. An accompanying AI system was trained to identify complex patterns in those signals.
- The two machine-learning algorithms performed better as the number of biomarkers increased, reaching more than 99% accuracy across 76 urine samples (a minimal sketch of the idea appears after this list).
- The study comes from Dr. Kwan Hyi Lee and Professor In Gab Jeong at the Korea Institute of Science and Technology.
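The summary doesn't name the two machine-learning algorithms or share the biosensor data, so the sketch below is purely illustrative: synthetic signals stand in for the four biomarker channels, and scikit-learn's RandomForestClassifier stands in for the study's models. It shows the general pattern the researchers describe, where cross-validated accuracy tends to climb as more biomarkers are added.

```python
# Illustrative sketch only: synthetic data stands in for the real
# biosensor signals, and a random forest stands in for the study's
# (unnamed) algorithms.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 76  # mirrors the study's 76 urine samples

# Four synthetic biomarker channels; cancer-positive samples (y == 1)
# shift each channel upward by a channel-specific amount.
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 4)) + y[:, None] * rng.uniform(0.5, 1.5, size=4)

for k in range(1, 5):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, X[:, :k], y, cv=5).mean()
    print(f"{k} biomarker(s): mean cross-validated accuracy = {acc:.2f}")
```

With only one channel the classes overlap heavily; each added channel gives the model another axis along which to separate them, which is the effect the researchers report.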
NATIONAL RESEARCH COUNCIL OF SCIENCE & TECHNOLOGY
European Parliament members adopted a new AI report yesterday that calls for a ban on lethal autonomous weapons, as well as on “highly intrusive” AI-based social scoring applications. AI programs can't replace human contact or decision-making, according to the report, which provides guidelines for both military and non-military uses of AI.
More:
- AI should remain under human control, so people can correct or disable it if it goes awry, the report says. It also calls for a ban on anthropomorphizing lethal autonomous weapons, so that robots cannot be mistaken for humans.
- The report asks governments to outlaw social scoring apps that rate and monitor citizens. It expresses concerns about AI-edited deepfakes, which it says have the potential to “destabilize countries, spread disinformation, and influence elections.”
- “Faced with the multiple challenges posed by the development of AI, we need legal responses,” said Parliament Member Gilles Lebreton, who wrote the 18-page report.
- The report passed with 364 votes in favor, 274 against, and 52 abstentions. It calls for a legal framework for AI that includes ethical principles and definitions.
SILICON REPUBLIC
The Alphabet Workers Union has expressed concerns about Google's decision to suspend the corporate account of one of its members, Margaret Mitchell, who leads Google's Ethical AI team. Mitchell was locked out by Google after she downloaded materials linked to the dismissal of her colleague, AI researcher Timnit Gebru.
More:
- The union said the treatment of Mitchell and Gebru represents “an attack on the people who are trying to make Google's technology more ethical.”
- The organization accused Alphabet, Google's parent company, of repeatedly testing the limits of employees' legal protections when terminating them.
- Google has said it is investigating claims that Mitchell used automated scripts to search for messages documenting Gebru's mistreatment at the company. The union accused Google of attacking Mitchell and trying to “tarnish her reputation by making claims that they're allegedly still investigating.”
- Gebru left the company after leaders asked her to retract a research paper highlighting biases and other problems in AI, or to remove her name from it. After Gebru's departure in December, Alphabet CEO Sundar Pichai apologized and pledged a review into what went wrong.
AFP
An analysis of facial recognition algorithms tested on real-world images showed higher false-positive rates for women with darker skin tones and for people wearing glasses. False positives occur when a facial recognition algorithm incorrectly matches one person's image to another person's in a database.
More:
- The analysis comes from researchers in the Human Pose Recovery and Behavior Analysis Group at the Computer Vision Center (CVC), University of Barcelona, who organized the evaluation as a challenge at last year's European Conference on Computer Vision.
- Overall, the 10 finalist teams submitted algorithms that exceeded 99.9% accuracy and scored low on the proposed bias metrics, the researchers said.
- A closer analysis of those submissions still showed higher false-positive rates for women with darker skin tones and for pairs of images in which both individuals wore glasses, along with higher false-negative rates for men with lighter skin tones and for pairs in which both subjects were 35 or younger. [False negatives occur when a system fails to make a match that does, indeed, exist.] A toy computation of such group-wise error rates appears after this list.
- The results could “be considered a step toward the development of fairer face recognition methods,” said CVC researcher Julio C. S. Jacques Jr. He noted, however, that “overall accuracy is not enough,” so “future works on the topic must take into account accuracy and bias mitigation together.”
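The challenge's exact scoring protocol isn't reproduced in this summary, so as a rough illustration, the sketch below (with made-up data and hypothetical group labels) computes the two quantities the study compares across demographics: per-group false-positive and false-negative rates for a face-verification system's match decisions.

```python
# Illustrative sketch: per-group false-positive and false-negative
# rates for binary match decisions. All data below is made up.
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """y_true: 1 if the image pair shows the same person, else 0.
    y_pred: 1 if the system declared a match, else 0.
    groups: demographic label attached to each pair."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        neg = y_true[m] == 0  # pairs of different people
        pos = y_true[m] == 1  # pairs of the same person
        fpr = (y_pred[m][neg] == 1).mean() if neg.any() else float("nan")
        fnr = (y_pred[m][pos] == 0).mean() if pos.any() else float("nan")
        rates[g] = {"FPR": fpr, "FNR": fnr}
    return rates

# Hypothetical example: group "A" suffers a false positive (third pair),
# group "B" suffers a false negative (fifth pair).
y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "B", "B", "B"])
print(group_error_rates(y_true, y_pred, groups))
```

A gap between groups in either rate is exactly the kind of disparity the analysis flags, even when overall accuracy exceeds 99.9%.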
Related:
- The 2018 “Gender Shades” project found that three gender-classification algorithms performed worst on darker-skinned women. A National Institute of Standards and Technology (NIST) analysis of nearly 190 face recognition algorithms reached a similar conclusion, finding the systems least accurate on women of color.
TECHXPLORE
CNBC interviewed AI researchers about their picks for the world's top AI labs. Though any ranking is contentious, several researchers pointed to Alphabet's DeepMind, the Elon Musk-cofounded OpenAI, and Facebook Artificial Intelligence Research, or FAIR, as among the highest ranked.
More:
- “Reputationally, there is a good argument to say DeepMind, OpenAI, and FAIR are the top three,” Mark Riedl, associate professor at the Georgia Tech School of Interactive Computing, told CNBC.
- Google Brain and Microsoft could also be included, said AI investor Nathan Benaich, a partner at Air Street Capital.
- While DeepMind, OpenAI, and FAIR likely have the most known funding, IBM holds more patents, according to one AI expert who asked to remain anonymous. He noted that Chinese tech giants such as Baidu and Tencent are harder to assess because their funding and other details aren't disclosed.
- Another way to measure clout is by the number of academic papers published at top AI conferences, like NeurIPS. Google had 178 papers published at NeurIPS, followed by Microsoft at 95, DeepMind at 59, Facebook at 58, and IBM at 38, CNBC noted.
CNBC
Springbox AI, an application that uses algorithms for financial forecasting, is now available on iOS and Android. Its makers say the beta app, which costs $49 a month, offers market forecasting through predictive analysis, live stock screening, intelligent trading news, and other services.
More:
- According to TechCrunch, it's "designed to replace financial market investment service" and is "aimed at the average financial markets trader."
- Its developers claim that it uses advanced algorithms “to identify high-profitability market patterns,” bringing AI technology used at an institutional level to other investors.
- Springbox AI has raised at least $2M from private European investors.
FORBES
How the new Biden administration, Congress, and the federal judiciary handle rules governing private companies' access to personal data will have significant consequences for American society, argue Westminster College academics Blaine Ravert and Tobias Gibson. The private sector's collection of metadata and use of artificial intelligence will continue to erode privacy rights if nothing is done to limit their impact, the authors warn.
More:
- Courts have developed a legal standard known as the third-party doctrine, which determines how much personal data governments can access. The doctrine states that a person does not have a "reasonable expectation of privacy" around personal data provided to third parties, such as phone companies, apps, and internet providers, when it comes to government access.
- However, no similar doctrine governs access to personal data by private companies such as Twitter, Facebook, and Google. That gap, Ravert and Gibson conclude, is why there is a pressing need for new data privacy rules for the private sector.
- Without constitutional protections, the right to privacy would cease to exist, argues Georgetown University law professor Laura Donohue.
A version of this story first appeared in privacyXFN, our newsletter covering in-depth cybersecurity news and analysis.
QUICK HITS:
- How are companies deciding on privacy management solutions in 2021? This eGuide breaks it down.*
- Voice transcription service Otter.ai is now available for Google Meet via a Chrome web browser extension. The service already provides live transcripts of Zoom calls.
- Uniphore has acquired Emotion Research Lab, a software developer that applies AI/ML to videos to help identify emotions and engagement levels.
- Sony's new lip-reading technology, which uses cameras and AI to read lips at a distance, could raise data privacy concerns.
- A U.S. federal appeals court has sent a lawsuit against Clearview AI back to state court because it was brought under Illinois' Biometric Information Privacy Act.
- VASynth, a new app for modders, uses AI models to convert text into synthesized speech.
- In a new blog post, Microsoft's Chief Responsible AI Officer Natasha Crampton explains the building blocks behind the company's efforts to make sure its AI programs reflect its principles.
- Nexo manages $4B in assets and has over 1M users. See why fintech consumers are banking on crypto.*
*This is a sponsored post.
Beth is a former investigative reporter for The Arizona Republic who authored a book about the U.S. solar industry. A graduate of the Walter Cronkite School of Journalism, she earned a First Amendment Award and a Pulitzer Prize nomination for her co-reporting on the rising costs of Arizona's taxpayer-funded pension systems.
Editor
Charlotte Hayes-Clemens is an editor and writer based in Vancouver. She has dabbled in both the fiction and non-fiction worlds, having worked at HarperCollins Publishers and, more recently, as a writing coach for new and self-published authors. Proper semicolon usage is her hill to die on.