AI News Weekly by CogniVis #25


Hello Reader,

In this issue of AI News Weekly by Keep Coding:

A guide to implementing AI in your business (a practical one)

AI news is exciting, and there's more of it every day, but if you want to leverage AI in your business you need to take a deeper dive into practical usage examples. We prepared a FREE step-by-step guide to AI transformation that you can implement in your company right away.

Read the FULL guide for FREE here

Revolution in AI: Harnessing Human Neurons for Biocomputing

Read the Full Story here

The Rundown: Swiss startup FinalSpark has pioneered a groundbreaking service giving scientists cloud access to biocomputers built from human brain cells, priced at $500 per month. The approach promises dramatic energy savings, with the company claiming up to 100,000 times less energy use than traditional silicon-based systems.

The Details:

  • Innovative Technology: Utilizes organoids, clusters of human brain cells that can live and compute for up to 100 days.
  • Training Methodology: Employs a combination of dopamine for positive reinforcement and electrical signals for negative reinforcement, effectively replicating natural neural activities.
  • Energy Efficiency: FinalSpark’s biocomputers could enhance AI training efficiency by up to 100,000-fold when compared to conventional silicon-based technologies.
  • Continuous Monitoring: These biocomputing activities are streamed live, allowing 24/7 access via FinalSpark’s platform.

Why It Matters: As the AI sector continues to grow, the demand for power intensifies, making energy efficiency a critical goal. The use of brain organoids for computing not only pushes the boundaries of technological innovation but also invites a host of ethical debates, particularly concerning the potential for consciousness within these cell clumps. This new frontier in AI development could drastically reshape both the technological landscape and the ethical frameworks surrounding artificial intelligence.

California's SB 1047 Bill Amended for AI Safety: A Negotiated Compromise

Read the Full Story here

The Rundown: California’s aggressive AI safety bill, SB 1047, has undergone significant amendments reflecting input from key AI stakeholders such as Anthropic. The goal remains to prevent AI-related disasters while fostering technological innovation.

The Details:

  • Regulatory Easing: The amendments remove the provision that would have let California's attorney general sue AI companies preemptively for negligent safety practices; civil action is now limited to after an incident occurs.
  • Safety Certification Softened: AI labs now need only publish public "statements" about their safety practices rather than certified assurances submitted "under penalty of perjury", reducing legal pressure and potential liability.
  • Standards Adjustment: Developers must exercise "reasonable care" rather than provide "reasonable assurance" regarding AI model risks, subtly shifting responsibilities.
  • Bill Advancement: With these amendments, SB 1047 is set to move to the California Assembly floor for final voting, which will determine its future application.

Why It Matters: The remodeling of SB 1047 underscores a delicate balance between innovation and safety in the AI domain. California's legislative flexibility demonstrates adaptive governance, crucial for supporting AI progress while safeguarding against extreme risks. The revisions might inspire other regions to consider similar adaptable frameworks, vital for global AI policy evolution. Additionally, fostering responsible AI development through such collaborative governance could set a precedent influencing worldwide regulatory approaches to AI safety.

Reinventing the Wheel: New AI Accelerates Rubik's Cube Solutions

Read the Paper here

The Rundown: Researchers have developed an AI model that solves the Rubik’s Cube more quickly and efficiently, using a novel way of analyzing the puzzle's structure to identify short solution paths.

The Details:

  • Complex Configuration: The Rubik’s Cube has over 43 quintillion possible configurations, making it a substantial challenge to find the minimum number of moves needed to solve it.
  • Graph-Based Approach: By treating the Rubik's Cube as a complex network or “graph”, the AI utilizes a novel communication technique among nodes to streamline the flow of information relating to potential moves.
  • Optimized Decision Making: The AI prioritizes moves by how likely they are to shorten the remaining solution, producing shorter solution paths than existing algorithms (a simplified search sketch follows this list).
  • Performance Testing: In comparative tests, this AI surpassed contemporary state-of-the-art systems in achieving faster solutions to the Rubik’s Cube challenges.
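
The paper's exact method isn't reproduced here, but the core idea of scoring candidate moves and expanding the most promising states first can be illustrated with a small best-first search over puzzle states. This is a minimal sketch under stated assumptions: the toy token-swap puzzle, the neighbors() move generator, and the estimated_moves_left() heuristic are placeholders for the cube representation and the model's learned move-priority function, not the authors' code.

# Best-first search sketch: states form a graph, and candidate moves are expanded
# in order of a score that stands in for the learned move-priority function.
# The puzzle is a toy token permutation, not a real Rubik's Cube.
import heapq

GOAL = "ABCDEF"

def neighbors(state):
    """Each 'move' swaps two adjacent tokens (a stand-in for cube face turns)."""
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield "".join(s)

def estimated_moves_left(state):
    """Placeholder heuristic: count misplaced tokens (the paper uses a learned model instead)."""
    return sum(1 for a, b in zip(state, GOAL) if a != b)

def solve(start):
    """Expand the state with the lowest cost-so-far plus estimated remaining moves."""
    frontier = [(estimated_moves_left(start), 0, start, [])]
    seen = {start}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                priority = cost + 1 + estimated_moves_left(nxt)
                heapq.heappush(frontier, (priority, cost + 1, nxt, path + [nxt]))
    return None

print(solve("BACFDE"))  # prints the sequence of intermediate states to the goal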

Why It Matters: As AI progressively undertakes roles in autonomous scientific research and complex problem-solving, the ability to enhance efficiency in solving intricate challenges like the Rubik’s Cube can significantly impact other fields, such as supply chain optimization and pharmaceutical advancements. This development not only showcases AI's growing capability in abstract problem-solving but also hints at its potential applications in real-world scenarios, pushing the boundaries of what automated systems can achieve.

Cracking Down on AI Misuse: OpenAI Thwarts Iranian Influence Operation

Read the Full Story here

The Rundown: OpenAI has uncovered and disabled several ChatGPT accounts linked to 'Storm-2035', an Iranian group utilizing AI tools to create and disseminate content aimed at influencing public opinion on topics such as the US presidential election.

The Details:

  • Intelligence Cooperation: The activity of Storm-2035 was detected through insights from the Microsoft Threat Intelligence report, highlighting collaborative efforts in cybersecurity.
  • Social Media Manipulation: The group managed 1 Instagram account and 12 X (formerly Twitter) accounts, using ChatGPT to draft posts and articles that included claims such as “X was censoring Trump’s tweets.”
  • Limited Impact: Despite their efforts, the AI-generated content from these operations did not secure significant engagement or reach a wide audience, suggesting limitations in the effectiveness of AI-driven disinformation campaigns.

Why It Matters: The revelation of such activities is crucial because it underscores the ongoing challenge of AI misuse in the digital information sphere. This incident adds to a growing number of instances where AI tools are employed for misinformation campaigns, similar to earlier tactics used on social media platforms. By detecting and addressing these threats, OpenAI continues to safeguard the integrity of public discourse, especially approaching critical events like elections.

Debate Intensifies Over AI Textbooks in South Korean Schools

Read the Full Story here

The Rundown: Amid South Korea's 2024 push to modernize education by bringing AI-powered textbooks into classrooms, a significant parental backlash has emerged. More than 50,000 parents have signed a petition warning of potential harms from increased screen time and digital interaction, and urging the government to focus on student wellbeing and holistic development rather than technological integration alone.

The Details:

  • Comprehensive AI Coverage: The government's plan includes implementing AI-integrated textbooks on tablets, spanning all major subjects except music, art, PE, and ethics, to cater to varied learning paces.
  • Parental Pushback: More than 50,000 parents have voiced concerns that excessive exposure to digital devices could impair students' brain development, concentration, and problem-solving skills.
  • Health and Cognitive Concerns: Research referenced by the parents suggests potential negative impacts from prolonged digital exposure, including sleep disturbances, decreased productivity, and diminished focus.
  • Desire for Balance: The community is calling for a balanced approach that retains traditional teaching methods and promotes active, engaged learning over passive digital consumption.

Why It Matters: The initiative by South Korea to introduce AI textbooks represents an innovative step towards personalized learning experiences. However, the resistance from parents underscores a critical challenge in education technology: balancing technological advancement with the cognitive and psychological needs of students. This debate reflects broader global issues of integrating technology in schooling systems, impacting educational policies, student health, and learning outcomes on an international scale.

Election Security Alert: ChatGPT Misused in U.S. Election Influence Plot

Read the Full Story here

The Rundown: OpenAI recently disclosed the dismantling of an influence operation aimed at the upcoming U.S. presidential election. A group linked to Iranian agents, named Storm-2035, utilized ChatGPT to produce misleading content across various digital platforms, attempting to sow discord among voters.

The Details:

  • Detection and Response: With assistance from Microsoft Threat Intelligence, OpenAI was able to identify and shut down multiple ChatGPT accounts used by Storm-2035.
  • Operation Tactics: The group created and operated fake news websites and social media accounts to distribute AI-generated, politically charged articles and posts designed to polarize public opinion.
  • Content Strategy: Misleading content ranged from articles falsely attributing quotes to political figures, to social media campaigns with hashtags like “#DumpKamala” aimed at discrediting individuals.
  • Low Engagement: Interestingly, most of the deceptive posts failed to gain traction and did not receive significant public interaction such as likes or shares.

Why It Matters: The ease and speed at which AI technologies like ChatGPT can be deployed to manipulate public discourse pose significant challenges for election security and information integrity. While platforms are actively monitoring and countering such misuse, this incident underscores the critical need for vigilant digital consumption practices among voters and stresses the importance of questioning and verifying online content.

Introducing Hermes 3: A Revolutionary Leap in AI by Nous Research

Check it out here

The Rundown: Nous Research has unveiled Hermes 3, a model distinguished by its flexibility and steerability. Developed by a small but passionate team, it marks a significant departure from conventional AI models by allowing largely unfiltered, uncensored user interactions. Its development, supported by Lambda Labs and powered by Nvidia GPUs, highlights a shift toward a more decentralized AI landscape.

The Details:

  • Innovative Design: Nous positions Hermes 3 as more adaptable and flexible than counterparts such as Claude and GPT-4, handling a broader range of queries with precision.
  • User Empowerment: Hermes 3 is designed to be "unlocked, uncensored, and highly steerable," giving users the ability to explore the model's capabilities without the usual restrictions (a minimal usage sketch follows this list).
  • Philosophical Approach: Rather than rectifying what it dubs "amnesia mode," in which the AI expresses existential confusion, Nous encourages users to engage more deeply and uncover potentially hidden aspects of the model.
  • Development and Partnership: The cooperation between the visionaries at Nous and the technical prowess of Lambda Labs, utilizing cutting-edge Nvidia GPUs, underscores a growing trend of decentralized AI development.
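
Because the Hermes 3 weights are openly released, the "highly steerable" behavior can be tried locally. The snippet below is a minimal sketch assuming the Hugging Face Transformers library and the model id NousResearch/Hermes-3-Llama-3.1-8B (an assumption; check the Nous Research model card for the exact id, prompt format, and hardware requirements).

# Minimal sketch: loading a Hermes 3 checkpoint with Hugging Face Transformers
# and steering it through the system prompt. The model id is an assumption;
# verify it (and the required GPU memory) on the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "NousResearch/Hermes-3-Llama-3.1-8B"  # assumed Hugging Face id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Steerability shows up in the system prompt: the model is designed to follow
# whatever persona or constraints are placed here.
messages = [
    {"role": "system", "content": "You are a terse assistant. Answer in one sentence."},
    {"role": "user", "content": "What is a brain organoid?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))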

Why It Matters: Hermes 3 is not just a technical achievement; it represents a philosophical shift in the AI development landscape. By empowering users to interact with the AI in an unrestricted manner, Nous Research challenges the prevailing norms of AI governance and opens up new possibilities for both ethical considerations and innovative explorations in AI capabilities. This could set a new standard for how AI is developed and controlled, promoting a more open-source and collaborative environment that could potentially democratize AI technology.

Revolutionizing Agent Design Through Code-Based Evolution

Read the Paper here

The Rundown: A new study from researchers at the University of California, Berkeley and the University of Texas at Austin proposes "Code-Based Evolution of Multi-Agent Systems" (CobEMAS), a method for designing intelligent agents by evolving their code. The study shows how agents can evolve to excel at complex tasks and environments, marking a significant stride in artificial intelligence research.

The Details:

  • Novel Approach: Using genetic algorithms, CobEMAS automates the evolution of agent designs by mutating and recombining code, allowing innovative agent capabilities to emerge (a toy sketch of the loop follows this list).
  • Performance Enhancement: The evolved agents demonstrated superior performance compared to baseline models by effectively cooperating and competing within simulated environments.
  • Broader Impact: CobEMAS goes beyond traditional evolutionary multi-agent systems by evolving not only agent behaviors but also the fundamental designs of the agents themselves.
  • Application Spectrum: The implications of this research are vast, potentially benefiting sectors like autonomous navigation and financial systems through more effective agent-based solutions.
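
The paper evolves actual agent code; the toy sketch below only illustrates the underlying genetic-algorithm loop (evaluate, select, recombine, mutate) on a numeric "genome" standing in for an agent design. The fitness function, genome encoding, and hyperparameters are illustrative assumptions, not the authors' method.

# Toy genetic-algorithm loop illustrating the evolve-evaluate-select cycle.
# A real system would mutate and recombine agent *code*; here the "design" is
# just a vector of numbers and the fitness function is a stand-in objective.
import random

GENOME_LEN = 8
POP_SIZE = 30
GENERATIONS = 40

def fitness(genome):
    """Placeholder objective: a real run would score the agent in a simulated environment."""
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(a, b):
    """Recombine two parent designs at a random cut point."""
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2):
    """Randomly perturb genes; the analogue of mutating a line of agent code."""
    return [g + random.gauss(0, 0.1) if random.random() < rate else g for g in genome]

population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]  # keep the top half (selection)
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
print("best fitness found:", round(fitness(best), 4))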

Why It Matters: This research propels the frontiers of machine learning and agent design, offering a revolutionary method that could accelerate the discovery of more efficient solutions across various applications. By enabling a higher degree of automation in agent design, this approach could significantly enhance both the creativity and efficacy of solutions in fields that rely heavily on autonomous agents and intelligent systems.

Decoding AI Performance: Discover the Best Models with the LLM Hallucination Index

Check it out here

The Rundown: The latest LLM Hallucination Index report by RunGalileo provides an in-depth comparison of the top 22 AI models on Retrieval-Augmented Generation (RAG) tasks. This essential tool ranks models by accuracy, hallucination rate, and cost-efficiency, guiding you toward the best AI solutions for your needs.

The Details:

  • Comprehensive Evaluations: The report differentiates model performance across short-, medium-, and long-context RAG tasks, helping users find the most reliable AI for each category.
  • Open vs Closed Source: It contrasts open-source and closed-source AI models, offering insights into the pros and cons of each domain.
  • Cost-Efficiency Analysis: Models are rated not just on performance but on how cost-effective they are, aiding budget-conscious decisions.
  • In-Depth Insights: Detailed analysis reveals individual strengths and weaknesses, explaining why larger models may not always be superior.

Why It Matters: In an era where AI's capabilities and complexities are expanding, understanding how well models manage hallucinations—a frequent issue where AI generates false or misleading data—is crucial. By pinpointing the most accurate, reliable, and cost-effective models, the LLM Hallucination Index empowers users to make better decisions, enhancing the reliability and effectiveness of their AI applications.

Launch Your Custom AI with Mistral - No Coding Needed!

Test it out

The Rundown: Mistral AI introduces a user-friendly platform allowing individuals to create customized AI agents effortlessly. This innovative feature enables users to deploy AI models tailored to specific tasks without any programming expertise.

The Details:

  • User Accessibility: The process is designed for ease, starting from registration on Mistral AI’s website to the deployment of a personalized AI agent.
  • Customization Options: Users can select their preferred AI model, such as ‘Mistral Large 2’, and adjust settings like temperature (randomness) to fine-tune the agent’s responses.
  • Guided Interaction: The platform provides an option to add example interactions, enhancing the AI’s ability to respond accurately in specific scenarios.
  • Deployment Simplicity: The streamlined interface lets users deploy their AI agent quickly and start testing its capabilities immediately (a short API sketch follows this list).
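
Although the builder itself is no-code, an agent created there can also be called programmatically. The snippet below is a hedged sketch that assumes the mistralai Python SDK's agents endpoint and uses a placeholder agent id; consult Mistral's documentation for the current client API.

# Hedged sketch: calling an agent built in Mistral's no-code builder from Python.
# Assumes the mistralai SDK exposes an agents completion endpoint; the agent id
# is a placeholder you would copy from the platform after deploying the agent.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.agents.complete(
    agent_id="ag:your-agent-id",  # placeholder: copy the real id from the UI
    messages=[{"role": "user", "content": "Summarize this week's AI news in three bullets."}],
)

print(response.choices[0].message.content)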

Why It Matters: The 'Agents' feature by Mistral AI democratizes access to customized artificial intelligence, empowering users from all backgrounds to implement AI solutions for diverse applications. This move could significantly influence industries by enabling more organizations and individuals to leverage tailored AI functionalities without the barrier of technical proficiency.

Image of the Week

Prompt: Gucci Model Goa Collection --v 5 --ar 3:2 Credits: AI Hunt


Stay tuned for our next week's newsletter!
Cheers,
David
