A critical part of OpenAI’s safeguarding process is “red teaming” — a structured methodology using both human and AI participants to explore potential risks and vulnerabilities in new systems.
Historically, OpenAI has engaged in red teaming efforts predominantly through manual testing, which involves individuals probing for weaknesses. This was notably employed during the testing of their DALL·E 2 image generation model in early 2022, where external experts were invited to identify potential risks. Since then, OpenAI has expanded and refined its methodologies, incorporating automated and mixed approaches for a more comprehensive risk assessment.
“We are optimistic that we can use more powerful AI to scale the discovery of model mistakes,” OpenAI stated. This optimism is rooted in the idea that automated processes can help evaluate models and train them to be safer by recognising patterns and errors on a larger scale.
In their latest push for advancement, OpenAI is sharing two important documents on red teaming — a white paper detailing external engagement strategies and a research study introducing a novel method for automated red teaming. These contributions aim to strengthen the process and outcomes of red teaming, ultimately leading to safer and more responsible AI implementations.
As AI continues to evolve, understanding user experiences and identifying risks such as ****** and misuse are crucial for researchers and developers. Red teaming provides a proactive method for evaluating these risks, especially when supplemented by insights from a range of independent external experts. This approach not only helps establish benchmarks but also facilitates the enhancement of safety evaluations over time.
The human touch
OpenAI has shared four fundamental steps in their white paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” to design effective red teaming campaigns:
Composition of red teams: The selection of team members is based on the objectives of the campaign. This often involves individuals with diverse perspectives, such as expertise in natural sciences, cybersecurity, and regional politics, ensuring assessments cover the necessary breadth.
Access to model versions: Clarifying which versions of a model red teamers will access can influence the outcomes. Early-stage models may reveal inherent risks, while more developed versions can help identify gaps in planned safety mitigations.
Guidance and documentation: Effective interactions during campaigns rely on clear instructions, suitable interfaces, and structured documentation. This involves describing the models, existing safeguards, testing interfaces, and guidelines for recording results.
Data synthesis and evaluation: Post-campaign, the data is assessed to determine if examples align with existing policies or require new behavioural modifications. The assessed data then informs repeatable evaluations for future updates.
A recent application of this methodology involved preparing the OpenAI o1 family of models for public use—testing their resistance to potential misuse and evaluating their application across various fields such as real-world ******* planning, natural sciences, and AI research.
Automated red teaming
Automated red teaming seeks to identify instances where AI may fail, particularly regarding safety-related issues. This method excels at scale, generating numerous examples of potential errors quickly. However, traditional automated approaches have struggled with producing diverse, successful ******* strategies.
OpenAI’s research introduces “Diverse And Effective Red Teaming With Auto-Generated Rewards And Multi-Step Reinforcement Learning,” a method which encourages greater diversity in ******* strategies while maintaining effectiveness.
This method involves using AI to generate different scenarios, such as illicit advice, and training red teaming models to evaluate these scenarios critically. The process rewards diversity and efficacy, promoting more varied and comprehensive safety evaluations.
Despite its benefits, red teaming does have limitations. It captures risks at a specific point in time, which may evolve as AI models develop. Additionally, the red teaming process can inadvertently create information hazards, potentially alerting malicious actors to vulnerabilities not yet widely known. Managing these risks requires stringent protocols and responsible disclosures.
While red teaming continues to be pivotal in risk discovery and evaluation, OpenAI acknowledges the necessity of incorporating broader public perspectives on AI’s ideal behaviours and policies to ensure the technology aligns with societal values and expectations.
See also: EU introduces draft regulatory guidance for AI models
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post OpenAI enhances AI safety with new red teaming methods appeared first on AI News.
View the full article
In 2024, Big Tech is all-in on artificial intelligence, with companies like Microsoft, Amazon, Alphabet, and Meta leading the way.
Their combined spending on AI is projected to exceed a jaw-dropping $240 billion. Why? Because AI isn’t just the future—it’s the present, and the demand for AI-powered tools and infrastructure has never been higher. The companies aren’t just keeping up; they’re setting the pace for the industry.
The scale of their investment is hard to ignore. In the first half of 2023, tech giants poured $74 billion into capital expenditure. By Q3, that number had jumped to $109 billion. In mid-2024, spending reached $104 billion, a remarkable 47% rise over the same ******* a year earlier. By Q3, the total hit $171 billion.
If this pattern continues, Q4 might add another $70 billion, bringing the total to a truly staggering $240 billion for the year.
Why so much spending?
AI’s potential is immense, and companies are making sure they’re positioned to reap the rewards.
A growing market: AI is projected to create $20 trillion in global economic impact by 2030. In countries like India, AI could contribute $500 billion to GDP by 2025. With stakes this high, big tech isn’t hesitating to invest heavily.
Infrastructure demands: Training and running AI models require massive investment in infrastructure, from data centres to high-performance GPUs. Alphabet increased its capital expenditures by 62% last quarter compared to the previous year, even as it cut its workforce by 9,000 employees to manage costs.
Revenue potential: AI is already proving its value. Microsoft’s AI products are expected to generate $10 billion annually—the fastest-growing segment in the company’s history. Alphabet, meanwhile, uses AI to write over 25% of its new code, streamlining operations.
Amazon is also ramping up, with plans to spend $75 billion on capital expenditure in 2024. Meta’s forecast is not far behind, with estimates between $38 and $40 billion. Across the board, organisations recognise that maintaining their edge in AI requires sustained and significant investment.
Supporting revenue streams
What keeps the massive investments keep on coming is the strength of big tech’s core businesses. Last quarter, Alphabet’s digital advertising machine, which is powered by Google’s search engine, generated $49.39 billion in ad revenue, a 12% year-over-year increase. This as a solid foundation that allows Alphabet to pour resources into building out its AI arsenal without destabilising the bottom line.
Microsoft’s diversified revenue streams are another example. While the company spent $20 billion on AI and cloud infrastructure last quarter, its productivity segment, which includes Office, grew by 12% to $28.3 billion, and its personal computing business, boosted by Xbox and the Activision Blizzard acquisition, grew 17% to $13.2 billion. These successes demonstrate how AI investments can support broader growth strategies.
The financial payoff
Big tech is already seeing the benefits of its heavy spending. Microsoft’s Azure platform has seen substantial growth, with its AI income approaching $6 billion. Amazon’s AI business is growing at triple-digit rates, and Alphabet reported a 34% jump in profits last quarter, with cloud revenue playing a major role.
Meta, while primarily focused on advertising, is leveraging AI to make its platforms more engaging. AI-driven tools, such as improved feeds and search features keep users on its platforms longer, resulting in new revenue growth.
AI spending shows no signs of slowing down. Tech leaders at Microsoft and Alphabet view AI as a long-term investment critical to their future success. And the results speak for themselves: Alphabet’s cloud revenue is up 35%, while Microsoft’s cloud business grew 20% last quarter.
For the time being, the focus is on scaling up infrastructure and meeting demand. However, the real transformation will come when big tech unlocks AI’s full potential, transforming industries and redefining how we work and live.
By investing in high-quality, centralised data strategies, businesses can ensure trustworthy and accurate AI implementations, and unlock AI’s full potential to drive innovation, improve decision-making, and gain competitive edge. AI’s revolutionary promise is within reach—but only for companies prepared to lay the groundwork for sustainable growth and long-term results.
(Photo by Unsplash)
See also: Microsoft tries to convert Google Chrome users
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Big tech’s AI spending hits new heights appeared first on AI News.
View the full article
Samsung has revealed Gauss2, the second-generation proprietary AI model set to significantly enhance user experiences.
Paul Kyungwhoon Cheun, President and CTO of the Device eXperience (DX) Division, commented: “Samsung Electronics is committed to developing cutting-edge software, including AI and data analytics, to enhance user experiences.
“With three distinct models, Samsung Gauss2 is already boosting our internal productivity, and we plan to integrate it into products to deliver higher levels of convenience and personalisation.”
Samsung Gauss2: Multimodal AI
Gauss2 builds upon last year’s internal generative AI model, showcasing advancements in handling language, code, and images. It comes in three variants: Compact, Balanced, and Supreme, tailored to different computational needs.
Compact: Optimised for environments with limited computing resources, ensuring effective performance even on-device.
Balanced: Strikes a balance among performance, speed, and efficiency; suitable for a variety of tasks.
Supreme: Incorporates Mixture of Experts (MoE) technology to minimise computational costs while maximising efficiency and performance.
Gauss2 is designed to support between nine and fourteen languages, depending on the model, and a range of programming languages.
Samsung’s custom training techniques and tokeniser aim to deliver peak efficiency across supported languages. The new models promise response generation speeds 1.5 to 3 times faster than leading open-source alternatives, facilitating prompt AI interaction and minimising wait times.
Customisation and deployment
Samsung’s in-house generative AI model allows easier customisation for specific applications.
Gauss2 supports diverse productivity tasks, with widespread adoption among Samsung developers. The coding assistant ‘code.i’ – enhanced by Gauss2 – is utilised extensively within the DX Division and by international research teams, with up to 60% of developers now engaging regularly.
The Gauss Portal, another Gauss-powered AI service, enhances productivity through features such as document summarisation and translation. Since its launch, this AI service has expanded internationally, aiding various office tasks. As of August, call centre operations also benefit from AI-driven categorisation and summarisation.
Going forward, Samsung aims to further boost internal productivity, improving services like code.i and enhancing the Gauss Portal’s natural language question-and-answer capabilities. Future functionalities will include multimodal operations, like chart analysis and image creation.
‘AI for All’
Samsung’s strategy, “AI for All,” envisions widespread incorporation of AI-based services across its product lines. By combining AI with knowledge graph technology, Samsung anticipates delivering even greater personalisation.
Besides Gauss2, the conference featured presentations on software development, including the SmartThings platform’s customer experience improvements.
Attendees engaged in 29 technical sessions covering:
The future of healthcare developments within Samsung’s health ecosystem.
Enhancements in the SmartThings experience through generative AI.
Insights into the code.i AI coding assistant.
Lifestyle content innovations for TV.
AI solutions for Samsung’s home appliances.
SDC24 Korea underscored Samsung’s vision to integrate AI across its ecosystem.
(Image Credit: Samsung)
See also: EU introduces draft regulatory guidance for AI models
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Samsung unveils Gauss2 AI model at SDC24 Korea appeared first on AI News.
View the full article
Discover how chatbots for marketing can boost your ROI with enhanced engagement and instant customer responses.
What are chatbots?
Chatbots are automated software applications designed to simulate human conversation. They interact with users through text or voice, providing immediate responses and performing various tasks. AI chatbots can understand and process natural language, enabling them to handle complex queries and provide relevant information or services.
Chatbots come in various forms, including:
Rule-based chatbots: Respond to specific commands predetermined by developers,
AI-driven chatbots: Use machine learning and natural language processing (NLP) to understand and adapt to user queries.
The importance of chatbots in marketing
Chatbots have become an essential component in modern marketing strategies. They offer marketers a way to connect with consumers efficiently, enhance customer experience, and streamline interactions.
Enhanced customer engagement: Chatbots engage customers by providing instant responses and personalised interactions,
24/7 availability: They operate around the clock, ensuring customer inquiries are addressed at any time,
Cost-effectiveness: Reduce the need for extensive human customer support, lowering operational costs,
Data collection: Gather valuable customer data and insights for better marketing strategies,
Increased lead generation: Facilitate the collection of leads by interacting with potential customers and capturing their information.
Chatbots play a crucial role in improving customer engagement. They provide a dynamic and interactive way for businesses to communicate with their audience, fostering stronger relationships and increasing satisfaction. Chatbots can quickly address common questions, offer recommendations, and guide customers through the purchasing process, creating a more personalised experience.
One of the most valuable features of chatbots is their ability to operate around the clock. Unlike human support teams, chatbots are available 24/7, ensuring customers receive immediate assistance regardless of the time of day. Continuous availability can significantly enhance the customer experience, leading to higher levels of satisfaction and loyalty.
The implementation of chatbots in marketing strategies not only streamlines operations but also delivers a more consistent and fulfilling customer experience. The result is an increase in engagement, satisfaction, and overall effectiveness in marketing campaigns.
Implementing chatbots in marketing strategies
Integrating chatbots into marketing strategies can significantly enhance customer engagement and streamline overall marketing efforts. This section delves into how chatbots can be used for personalised customer interactions and how they facilitate data collection and analysis.
Chatbots offer a unique opportunity to create personalised interactions with customers. By using AI and machine learning, chatbots can tailor responses based on user behaviours, preferences, and past interactions. The personalised approach can make customers feel valued and understood, increasing their satisfaction and loyalty.
Key aspects of personalised customer interactions:
Greeting users by name: Addressing customers by their names makes interactions feel more personal and engaging,
Tailored recommendations: Based on previous interactions, chatbots can recommend products, content, or services that align with individual preferences,
Customising responses: Chatbots can adjust their responses based on the user’s mood, choices, and patterns.
Chatbots are important in gathering and analysing customer data. The interactions between users and chatbots generate valuable insights that can be used to optimise marketing strategies. Collecting this data can help businesses understand customer needs, behaviour patterns, and preferences.
Key areas where chatbots aid in data collection and analysis:
User interaction history: Chatbots store conversation logs, providing insights into common customer queries and issues,
Demographic data: Collecting information like age, location, and interests helps in segmenting the audience for targeted marketing,
Feedback mechanisms: Gathering feedback directly through chatbot interactions allows businesses to gauge customer satisfaction and areas for improvement.
In the realm of marketing, using chatbots can significantly boost return on investment (ROI). Two of the primary factors contributing to this increase are cost-effectiveness and enhanced lead generation and conversions.
Chatbots offer substantial cost savings for businesses. By automating customer interactions, businesses can reduce the need for a large customer service team. This not only lowers operational costs but also streamlines processes. The initial investment in chatbot technology is often offset by the long-term savings achieved through decreased labour costs and increased efficiency.
Also, chatbots can play a critical role in lead generation and conversion rates. By engaging users in real-time, chatbots can qualify leads, provide personalised recommendations, and guide users through the sales funnel. Immediate interaction can improve user experience and lead to higher conversion rates.
Utilising chatbots for marketing can lead to higher efficiency, greater customer interaction, and ultimately a better ROI. By understanding and implementing these technologies, businesses can enhance their marketing strategies and achieve substantial financial benefits.
The post Boost your ROI: The impact of chatbots on marketing appeared first on AI News.
View the full article
AI is rapidly becoming ubiquitous across business systems and IT ecosystems, with adoption and development racing faster than anyone could have expected. Today it seems that everywhere we turn, software engineers are building custom models and integrating AI into their products, as business leaders incorporate AI-powered solutions in their working environments.
However, uncertainty about the best way to implement AI is stopping some companies from taking action. Boston Consulting Group’s latest Digital Acceleration Index (DAI), a global survey of 2,700 executives, revealed that only 28% say their organisation is fully prepared for new AI regulation.
Their uncertainty is exacerbated by AI regulations arriving thick and fast: the EU AI act is on the way; Argentina released a draft AI plan; Canada has the AI and Data Act; China has enacted a slew of AI regulations; and the G7 nations launched the “Hiroshima AI process.” Guidelines abound, with the OECD developing AI principles, the UN proposing a new UN AI advisory body, and the Biden administration releasing a blueprint for an AI Bill of Rights (although that could quickly change with the second Trump administration).
Legislation is also coming in individual US states, and is appearing in many industry frameworks. To date, 21 states have enacted laws to regulate AI use in some manner, including the Colourado AI Act, and clauses in California’s CCPA, plus a further 14 states have legislation awaiting approval.
Meanwhile, there are loud voices on both sides of the AI regulation debate. A new survey from SolarWinds shows 88% of IT professionals advocate for stronger regulation, and separate research reveals that 91% of British people want the government to do more to hold businesses accountable for their AI systems. On the other hand, the leaders of over 50 tech companies recently wrote an open letter calling for urgent reform of the EU’s heavy AI regulations, arguing that they stifle innovation.
It’s certainly a tricky ******* for business leaders and software developers, as regulators scramble to catch up with tech. Of course you want to take advantage of the benefits AI can provide, you can do so in a way that sets you up for compliance with whatever regulatory requirements are coming, and don’t handicap your AI use unnecessarily while your rivals speed ahead.
We don’t have a crystal ball, so we can’t predict the future. But we can share some best practices for setting up systems and procedures that will prepare the ground for AI regulatory compliance.
Map out AI usage in your wider ecosystem
You can’t manage your team’s AI use unless you know about it, but that alone can be a significant challenge. Shadow IT is already the scourge of cybersecurity teams: Employees sign up for SaaS tools without the knowledge of IT departments, leaving an unknown number of solutions and platforms with access to business data and/or systems.
Now security teams also have to grapple with shadow AI. Many apps, chatbots, and other tools incorporate AI, machine learning (ML), or natural language programming (NLP), without such solutions necessarily being obvious AI solutions. When employees log into these solutions without official approval, they bring AI into your systems without your knowledge.
As Opice Blum’s data privacy expert Henrique Fabretti Moraes explained, “Mapping the tools in use – or those intended for use – is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilisation.”
Some regulations hold you responsible for AI use by vendors. To take full control of the situation, you need to map all the AI in your, and your partner organisations’ environments. In this regard, using a tool like Harmonic can be instrumental in detecting AI use across the supply chain.
Verify data governance
Data privacy and security are core concerns for all AI regulations, both those already in place and those on the brink of approval.
Your AI use already needs to comply with existing privacy laws like GDPR and CCPR, which require you to know what data your AI can access and what it does with the data, and for you to demonstrate guardrails to protect the data AI uses.
To ensure compliance, you need to put robust data governance rules into place in your organisation, managed by a defined team, and backed up by regular audits. Your policies should include due diligence to evaluate data security and sources of all your tools, including those that use AI, to identify areas of potential bias and privacy risk.
“It is incumbent on organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts,” said Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds. “This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”
Establish continuous monitoring for your AI systems
Effective monitoring is crucial for managing any area of your business. When it comes to AI, as with other areas of cybersecurity, you need continuous monitoring to ensure that you know what your AI tools are doing, how they are behaving, and what data they are accessing. You also need to audit them regularly to keep on top of AI use in your organisation.
“The idea of using AI to monitor and regulate other AI systems is a crucial development in ensuring these systems are both effective and ethical,” said Cache Merrill, founder of software development company Zibtek. “Currently, techniques like machine learning models that predict other models’ behaviours (meta-models) are employed to monitor AI. The systems analyse patterns and outputs of operational AI to detect anomalies, biases or potential failures before they become critical.”
Cyber GRC automation platform Cypago allows you to run continuous monitoring and regulatory audit evidence collection in the background. The no-code automation allows you to set custom workflow capabilities without technical expertise, so alerts and mitigation actions are triggered instantly according to the controls and thresholds you set up.
Cypago can connect with your various digital platforms, synchronise with virtually any regulatory framework, and turn all relevant controls into automated workflows. Once your integrations and regulatory frameworks are set up, creating custom workflows on the platform is as simple as uploading a spreadsheet.
Use risk assessments as your guidelines
It’s vital to know which of your AI tools are high risk, medium risk, and low risk – for compliance with external regulations, for internal business risk management, and for improving software development workflows. High risk use cases will need more safeguards and evaluation before deployment.
“While AI risk management can be started at any point in the project development,” Ayesha Gulley, an AI policy expert from Holistic AI, said. “Implementing a risk management framework sooner than later can help enterprises increase trust and scale with confidence.”
When you know the risks posed by different AI solutions, you can choose the level of access you’ll grant them to data and critical business systems.
In terms of regulations, the EU AI Act already distinguishes between AI systems with different risk levels, and NIST recommends assessing AI tools based on trustworthiness, social impact, and how humans interact with the system.
Proactively set AI ethics governance
You don’t need to wait for AI regulations to set up ethical AI policies. Allocate responsibility for ethical AI considerations, put together teams, and draw up policies for ethical AI use that include cybersecurity, model validation, transparency, data privacy, and incident reporting.
Plenty of existing frameworks like NIST’s AI RMF and ISO/IEC 42001 recommend AI best practices that you can incorporate into your policies.
“Regulating AI is both necessary and inevitable to ensure ethical and responsible use. While this may introduce complexities, it need not hinder innovation,” said Arik Solomon, CEO and co-founder of Cypago. “By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”
Companies that can demonstrate a proactive approach to ethical AI will be better positioned for compliance. AI regulations aim to ensure transparency and data privacy, so if your goals align with these principles, you’ll be more likely to have policies in place that comply with future regulation. The FairNow platform can help with this process, with tools for managing AI governance, bias checks, and risk assessments in a single location.
Don’t let ***** of AI regulation hold you back
AI regulations are still evolving and emerging, creating uncertainty for businesses and developers. But don’t let the fluid situation stop you from benefiting from AI. By proactively implementing policies, workflows, and tools that align with the principles of data privacy, transparency, and ethical use, you can prepare for AI regulations and take advantage of AI-powered possibilities.
The post Preparing today for tomorrow’s AI regulations appeared first on AI News.
View the full article
Business Insider’s “CXO AI Playbook” looks at how firms are utilising AI to tackle challenges, scale operations, and plan for the future.
The Playbook looks at stories from various industries to see what problems AI is solving, who’s driving these initiatives, and how it’s reshaping strategies.
Salesforce, well known for its CRM software used by over 150,000 companies like Amazon and Walmart, is no stranger to innovation. It also owns Slack, the popular workplace communication app. Salesforce is now stepping up its AI game with Agentforce, a platform that lets businesses to build and deploy digital agents to automate tasks such as creating sales reports and summarising Slack conversations.
What problem is it solving?
Salesforce has been working with AI for years. In 2016, it launched Einstein, an AI feature baked into its CRM platform. Einstein handled basic scriptable tasks, but the rise of generative AI brought a chance to do more. Smarter tools could now make better decisions and understand natural language.
This sparked a transformation. First came Einstein GPT, then Einstein Copilot, and now Agentforce—a platform designed for flexibility with prebuilt and customisable agents to handle diverse business needs.
“Our customers wanted more. Some wanted to tweak the agents we offer, while others wanted to create their own,” said Tyler Carlson, Salesforce’s VP of Business Development.
The tech behind it
Agentforce is powered by Salesforce’s Atlas Reasoning Engine, developed in-house. The platform connects with AI models from major players like OpenAI, Anthropic, Amazon, and Google, giving businesses access to a variety of tools.
Slack has become a testing ground for these AI agents. Currently in beta, Agentforce’s Slack integration puts automations where employees already spend their time. “Slack makes these tools easy to use and accessible,” Carlson added.
Smarter, more flexible AI
Agentforce uses ReAct prompting, a technique that helps agents break down problems into smaller steps and adjust their approach as they go. This leads to more accurate responses and hands-off task management, from answering questions to scheduling meetings.
Agentforce works with Salesforce’s proprietary LLMs and third-party models, giving clients plenty of options. To ensure security, Salesforce enforces strict data privacy policies, including limits on data retention.
Making it work for businesses
With tools like Agentbuilder, companies can design AI agents tailored to their needs. For example, an agent could sort emails or answer specific HR questions using internal data. One example is Salesforce’s collaboration with Workday to create an AI service agent for employee queries.
Salesforce is already seeing results, with Agentforce resolving 90% of customer inquiries in early trials. The goal? Broader adoption, more capabilities, and higher workloads handled by these agents.
“We’re building a ******* ecosystem of partners and skills,” Carlson said. “By next year, we want Agentforce to be a must-have for businesses.”
(Photo by Unsplash)
See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Salesforce launches AI platform for automated task management appeared first on AI News.
View the full article
A report by Publicis Sapient sheds light on the disparities between the C-suite and practitioners, dubbed the “V-suite,” in their perceptions and adoption of generative AI.
The report reveals a stark contrast in how the C-suite and V-suite view the potential of generative AI. While the C-suite focuses on visible use cases such as customer experience, service, and sales, the V-suite sees opportunities across various functional areas, including operations, HR, and finance.
Risk perception
The divide extends to risk perception as well. Fifty-one percent of C-level respondents expressed more concern about the risk and ethics of generative AI than other emerging technologies. In contrast, only 23 percent of the V-suite shared these worries.
Simon James, Managing Director of Data & AI at Publicis Sapient, said: “It’s likely the C-suite is more worried about abstract, big-picture dangers – such as Hollywood-style scenarios of a rapidly-evolving superintelligence – than the V-suite.”
The report also highlights the uncertainty surrounding generative AI maturity. Organisations can be at various stages of maturity simultaneously, with many struggling to define what success looks like. More than two-thirds of respondents lack a way to measure the success of their generative AI projects.
Navigating the generative AI landscape
Despite the C-suite’s focus on high-visibility use cases, generative AI is quietly transforming back-office functions. More than half of the V-suite respondents ranked generative AI as extremely important in areas like finance and operations over the next three years, compared to a smaller percentage of the C-suite.
To harness the full potential of generative AI, the report recommends a portfolio approach to innovation projects. Leaders should focus on delivering projects, controlling shadow IT, avoiding duplication, empowering domain experts, connecting business units with the CIO’s office, and engaging the risk office early and often.
Daniel Liebermann, Managing Director at Publicis Sapient, commented: “It’s as hard for leaders to learn how individuals within their organisation are using ChatGPT or Microsoft Copilot as it is to understand how they’re using the internet.”
The path forward
The report concludes with five steps to maximise innovation: adopting a portfolio approach, improving communication between the CIO’s office and the risk office, seeking out innovators within the organisation, using generative AI to manage information, and empowering team members through company culture and upskilling.
As generative AI continues to evolve, organisations must bridge the gap between the C-suite and V-suite to unlock its full potential. The future of business transformation ***** in harnessing the power of a decentralised, bottom-up approach to innovation.
See also: EU introduces draft regulatory guidance for AI models
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Generative AI: Disparities between C-suite and practitioners appeared first on AI News.
View the full article
The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models.
The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:
Working Group 1: Transparency and copyright-related rules
Working Group 2: Risk identification and assessment for systemic risk
Working Group 3: Technical risk mitigation for systemic risk
Working Group 4: Governance risk mitigation for systemic risk
The draft is aligned with existing laws such as the Charter of Fundamental Rights of the ********* Union. It takes into account international approaches, striving for proportionality to risks, and aims to be future-proof by contemplating rapid technological changes.
Key objectives outlined in the draft include:
Clarifying compliance methods for providers of general-purpose AI models
Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
Continuously assessing and mitigating systemic risks associated with AI models
Recognising and mitigating systemic risks
A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.
As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle.
The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.
Taking a proactive stance to AI regulatory guidance
The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability.
As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology.
While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.
This draft is open for written feedback until 28 November 2024.
See also: Anthropic urges AI regulation to avoid catastrophes
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post EU introduces draft regulatory guidance for AI models appeared first on AI News.
View the full article
Microsoft Edge has evolved into more than simply a browser; it is a critical component of Microsoft’s ecosystem, meant to integrate smoothly with Windows and highlight the company’s latest innovations, such as its AI assistant, Copilot.
While these interconnections make Edge a viable choice, Microsoft’s methods for persuading consumers to choose it have been far from covert.
From default settings that prioritise Edge to persistent prompts at startup, Microsoft has made it clear they want Edge to be the go-to for Windows users. And lately, it’s upped the ante: now, Edge can launch automatically when your computer boots up, instantly nudging you to bring over your data from other browsers.
The most recent update includes an auto-checked option to import browsing data from Chrome, such as history, bookmarks, and open tabs, in the name of users leveraging the features of AI assistant, Copilot. Although AI features may be appealing to some, the aggressive approach has left many users feeling annoyed rather than tempted.
The Verge recently noticed that when you start up your PC, Edge might decide to open on its own, promptly displaying a pop-up for its AI assistant, Copilot. Right next to Copilot, there’s a conveniently checked box allowing Edge to import data from other browsers automatically. For some users, this seems like an overreach, raising doubts about how far Microsoft is ready to go to make Edge the browser of choice.
Microsoft has confirmed this setup and stated that customers have the option to opt-out. Still, with default settings that favour data imports and an eye-catching import button, it’s easy for users to unintentionally make the switch, especially if they’re not paying attention. For those who prefer sticking with their existing browsers without interruption, the approach can feel unwelcome.
But even if users dodge the pop-ups, Edge isn’t exactly shy. Uninstalling it is a complex process, and it often gets reinstalled by Windows updates, much to the frustration of users who would rather go without. For many, this persistence feels more like a forceful sales pitch rather than a friendly suggestion.
Interestingly, this isn’t the first time Microsoft has tried this type of strategy. A similar message appeared to users earlier this year but was pulled back after strong objections. Now, it’s back, with Microsoft’s Caitlin Roulston stating the notification is meant to “give users the choice to import data from other browsers.”
In fact, Microsoft’s bold tactics go back some years. In 2022, it introduced a feature that could automatically pull data from Chrome into Edge – although users had the option to decline. In 2021, the company made it practically impossible to set any browser other than Edge as the default, resulting in enough outcry for Microsoft to back down.
While Microsoft promotes its intrusive pop-ups as a way to give users more control, others who value choice without constant nudges. The relentless push for Edge usage could actually be detrimental, as the company’s persistence may drive users toward other browsers rather than away. To truly compete, Microsoft might benefit from letting Edge’s strengths speak for themselves rather than relying on aggressive prompts to change hearts and minds.
(Photo by Surface)
See also: EU probes Microsoft-OpenAI and Google-Samsung AI deals
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Microsoft tries to convert Google Chrome users appeared first on AI News.
View the full article
Did you know that effective asset management practices pose challenges for almost half of small businesses? According to the latest research, 43% of businesses either manually report their inventory or in a few cases, do not record assets in any manner.
However, asset management is not immune to the disruptive pressure of artificial intelligence (AI) currently revolutionising numerous industries. The manner in which corporations manage their tangible and intangible assets is undergoing a profound transformation due to the evolving technology of AI. This blog will discover how AI-driven fixed asset software softwares transform asset management and what the future holds for businesses embedding those innovations.
Introduction to fixed asset management and AI
Fixed asset management is a critical feature for organisations to manage, control, and optimise the value of their physical assets. Assets can include everything from equipment and vehicles to home computer systems. Traditionally, manual asset management systems entail manual report maintenance and periodic audits, which can be time-consuming and susceptible to human error.
AI-driven fixed assets software offers a modern solution by automating diverse asset control factors. This guarantees accuracy, reduces administrative overhead, and increases an asset’s useful life, ultimately contributing to significant cost savings. AI, blended with the Internet of Things (IoT), machine learning (ML), and predictive analytics, is the primary method to develop smart, efficient, and scalable asset management solutions.
The predictive capacities of AI revolutionise proactive asset management. AI can predict when a piece of hardware is likely to fail or spot chances for optimisation by evaluating patterns and trends in data. The proactive strategy not only helps with strategic planning but also ensures the reliability of operations by preventing system outages that can cause serious disruptions to business operations and financial losses. Businesses may use AI to ensure their assets operate at peak efficiency, quickly adopt new technologies, and match operations to corporate goals.
AI’s advantages for fixed asset software
AI-driven fixed asset software has numerous advantages for businesses, particularly in sectors where asset management is vital to daily operations, like production, healthcare, and logistics.
Greater effectiveness: Automation significantly speeds up asset tracking, control, and upkeep. As AI can assess huge amounts of information in real time, managers can respond immediately to determine the state of their assets.
Cost savings: Ongoing asset utilisation and predictive analysis can result in lower operating costs. AI is capable of identifying underutilised or poorly functioning items, which may assist corporations in saving money by reallocating or disposal schedules.
Enhanced compliance and reporting: Staying compliant can be challenging with increasingly stringent regulatory governance. AI ensures that compliance reports are generated accurately and on time. Moreover, the software can routinely modify asset data to mirror regulatory changes, ensuring that companies consistently comply with laws.
Improved decision-making: With AI’s analytics capabilities, managers can make better choices about which assets to invest in, when to repair, and when to retire an asset. Selections are based on real-time information and predictive models instead of guesswork or manual calculations.
Case study: Predictive portfolio management precision issue:
Predicting market trends and real-time portfolio optimisation was complicated for a top asset management company. Conventional approaches could not keep up with market demands, resulting in lost opportunities and less-than-ideal results.
Solution:
The company was able to quickly evaluate large datasets by implementing an AI-powered predictive analytics system. The AI algorithms examined market patterns, assessed risk factors, and dynamically altered the portfolio. The end result was a notable improvement in portfolio performance and increased forecasting accuracy.
Findings:
A 20% boost in portfolio returns was attained.
Real-time market trend information improved decision-making.
The future of AI in asset management
The future of asset management will revolutionise customer satisfaction, operational effectiveness, and decision-making. Below are the important elements that will transform asset management operations:
1) Elevated decision making
By revealing hidden patterns from huge datasets, AI will permit asset managers to make better decisions. AI can evaluate the whole portfolio, compiling financial statistics and market news, which together will improve risk posture and portfolio formulation. AI will also make real-time adaptation feasible, preparing managers for future predictions and staying ahead of marketplace swings.
2) Automation and operational efficiency
Robo-advisors will become necessary tools, autonomously managing tasks like portfolio rebalancing and standard operations. AI’s algorithmic training will ******** decisions quickly, decreasing human intervention and cutting costs. AI will automate tedious back-office operations, including data entry and regulatory compliance procedures, ensuring smooth, streamlined workflows.
3) Client experience transformation
In the future, client interactions will become customised and more responsive. AI will analyse purchaser information to provide tailored funding recommendations, and AI-powered chatbots will be available 24/7 to answer queries. The technology can even simplify reporting, turning complex economic information into easily digestible, jargon-free insights, building trust and transparency in customer relationships.
Conclusion:
The future of asset management is undeniably tied to improvements in AI technology. AI-driven fixed asset software is already impacting asset monitoring, predictive analytics, and risk management by optimisation and automation. As hyper automation and IoT continue to adapt, the possibilities for remodeling asset management are limitless.
(Photo source)
The post Using AI technologies for future asset management appeared first on AI News.
View the full article
Japan is on a mission to become a global AI powerhouse, and it’s starting with some impressive advances in AI-driven language models. ********* technology experts are developing advanced models that grasp the unique nuances of the ********* language and culture—essential for industries such as healthcare, finance, and manufacturing – where precision is key.
But this effort isn’t Japan’s alone. Consulting giants like Accenture, Deloitte, EY Japan, FPT, Kyndryl, and TCS Japan are partnering with NVIDIA to create AI innovation hubs across the country. The centres are using NVIDIA’s AI software and specialised ********* language models to build tailored AI solutions, helping industries boost productivity in a digital workforce. The goal? To get ********* companies fully on board with enterprise and physical AI.
One standout technology supporting the drive is NVIDIA’s Omniverse platform. With Omniverse, ********* companies can create digital twins—virtual replicas of real-world assets—and test complex AI systems safely before implementing them. This is a game-changer for industries such as manufacturing and robotics, allowing businesses to fine-tune processes without the risk of real-world trial and error. This use of AI is more than just innovation; it represents Japan’s plan for addressing some major challenges ahead.
Japan faces a shrinking workforce presence as its population ages. With its strengths in robotics and automation, Japan is well-positioned to use AI solutions to bridge the gap. In fact, Japan’s government recently shared its vision of becoming “the world’s most AI-friendly country,” underscoring the perceived role AI will play in the nation’s future.
Supporting this commitment, Japan’s AI market hit $5.9 billion in value this year; a 31.2% growth rate according to IDC. New AI-focused consulting centres in Tokyo and Kansai give ********* businesses hands-on access to NVIDIA’s latest technologies, equipping them to solve social challenges and aid economic growth.
Top cloud providers like SoftBank, GMO Internet Group, KDDI, Highreso, Rutilea, and SAKURA Internet are also involved, working with NVIDIA to build AI infrastructure. Backed by Japan’s Ministry of Economy, Trade and Industry, they’re establishing AI data centres across Japan to accelerate growth in robotics, automotive, healthcare, and telecoms.
NVIDIA and SoftBank have also formed a remarkable partnership to build Japan’s most powerful AI supercomputer using NVIDIA’s Blackwell platform. Additionally, SoftBank has tested the world’s first AI and 5G hybrid telecoms network with NVIDIA’s AI Aerial platform, allowing Japan to set a worldwide standard. With these developments, Japan is taking big strides toward establishing itself as a leader in the AI-powered industrial revolution.
(Photo by Andrey Matveev)
See also: NVIDIA’s share price nosedives as antitrust clouds gather
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post NVIDIA AI Summit Japan: NVIDIA’s role in Japan’s big AI ambitions appeared first on AI News.
View the full article
AI continues to transform industries, and having the right skills can make a significant difference to your career. Professionals wishing to get into this evolving field can take advantage of a variety of specialised courses that teach how to use AI in business, creativity, and data analysis.
Artificial Intelligence: Preparing Your Career for AI
Artificial Intelligence: Preparing Your Career for AI is an option for those wanting to future-proof their careers in an AI-centric workplace. The course outlines five essential steps for preparing for AI’s impact on job roles and skill requirements. Participants learn the basics of AI, strategies for aligning their career paths with AI advancements, and how to use AI responsibly.
The course is ideal for individuals at any career stage who wish to understand AI’s impact on the job market and adapt proactively.
Generative AI for Leaders
For business leaders, Generative AI for Leaders focuses on integrating AI into organisation’s strategy. AI’s rapid advancement offers both opportunities and challenges for business leaders, who must balance innovation with ethical and operational concerns.
In this course, participants learn strategies for building AI-driven business initiatives and fostering collaboration, and learn how to address compliance and ethical considerations.
With a practical look at AI trends, this course prepares leaders to develop a culture that supports AI adoption and equips them with the tools needed to make informed decisions.
Business Analyst: Digital Director for AI and Data Science
Business Analyst: Digital Director for AI and Data Science is a course designed for business analysts and professionals explaining how to define requirements for data science and artificial intelligence projects. The course covers the requirements elicitation process for AI applications and teaches participants how to work closely with data scientists and machine learning engineers to ensure that AI projects meet business goals.
Learners gain insights into conversational AI tools, the differences between Natural Language Understanding (NLU) bots and rule-based bots, and best practices in conversation flow analysis. For business analysts, the course provides essential skills to guide AI initiatives that deliver real business value.
Prompt Engineering+: Master Speaking to AI
One valuable course is Prompt Engineering+: Master Speaking to AI, which teaches the art of creating precise instructions for generative AI models. ‘Prompt engineering’ is essential for situations in which human intent must be accurately translated into AI output.
The course covers prompt structure, including one-shot, few-shot, and zero-shot learning, as well as fundamental skills like natural language processing and Python programming. Students work with leading models including ChatGPT, Google Gemini, and DALL-E, and learn practical methods to refine and test prompts, control model output, and tackle inaccuracies.
For those looking to work directly with generative AI, this course provides a foundational skill set to optimise AI interactions.
Canva AI: Master Canva AI Tools and Apps 2024
Content creators can benefit from Canva AI: Master Canva AI Tools and Apps 2024, a course focused on using Canva’s AI-driven tools to streamline and enhance content production. This course introduces participants to Canva’s Magic Studio, where they explore tools for creating engaging social media posts, PDFs, videos, and presentations. From text-to-image conversions to speaking avatars, the course delves into AI tools that help creators produce content efficiently.
Through hands-on projects, learners experience Canva AI’s capabilities, enabling them to produce a wide variety of content quickly and effectively—a valuable skill for social media, marketing, and creative professionals.
Conclusion
These courses offer a comprehensive toolkit for mastering AI skills in various fields. Embracing these opportunities can empower professionals to lead, create, and adapt in an AI-driven organisations. Whether you’re a business leader, a content creator, or a data professional, investing in AI skills prepares you to navigate the future with confidence and purpose.
See also: Understanding AI’s impact on the workforce
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post AI courses to boost your skills and stay ahead appeared first on AI News.
View the full article
OpenAI is facing diminishing returns with its latest AI model while navigating the pressures of recent investments.
According to The Information, OpenAI’s next AI model – codenamed Orion – is delivering smaller performance gains compared to its predecessors.
In employee testing, Orion reportedly achieved the performance level of GPT-4 after completing just 20% of its training. However, the transition from GPT-4 to the anticipated GPT-5 is said to exhibit smaller quality improvements than the leap from GPT-3 to GPT-4.
“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” stated employees in the report. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”
Early stages of AI training usually yield the most significant improvements, while subsequent phases typically result in smaller performance gains. Consequently, the remaining 80% of training is unlikely to deliver advancements on par with previous generational improvements.
This situation with its latest AI model emerges at a pivotal time for OpenAI, following a recent funding round that saw the company raise $6.6 billion. With this financial backing comes increased expectations from investors, as well as technical challenges that complicate traditional scaling methodologies in AI development.
If these early versions do not meet expectations, OpenAI’s future fundraising prospects may not attract the same level of interest.
The limitations highlighted in the report underline a significant challenge confronting the entire AI industry: the diminishing availability of high-quality training data and the necessity to maintain relevance in an increasingly competitive field.
According to a paper (PDF) that was published in June, AI firms will deplete the pool of publicly available human-generated text data between 2026 and 2032. The Information notes that developers have “”largely squeezed as much out of” the data that has been used for enabling the rapid AI advancements we’ve seen in recent years.
To address these challenges, OpenAI is fundamentally rethinking its AI development strategy.
“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” explains The Information.
As OpenAI navigates these challenges, the company must balance innovation with practical application and investor expectations. However, the ongoing exodus of leading figures from the company won’t help matters.
(Photo by Jukan Tateisi)
See also: ASI Alliance launches AIRIS that ‘learns’ in Minecraft
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post OpenAI faces diminishing returns with latest AI model appeared first on AI News.
View the full article
The Tony Blair Institute (TBI) has examined AI’s impact on the workforce. The report outlines AI’s potential to reshape work environments, boost productivity, and create opportunities—while warning of potential challenges ahead.
“Technology has a long history of profoundly reshaping the world of work,” the report begins.
From the agricultural revolution to the digital age, each wave of innovation has redefined labour markets. Today, AI presents a seismic shift, advancing rapidly and prompting policymakers to prepare for change.
Economic opportunities
The TBI report estimates that AI, when fully adopted by *** firms, could significantly increase productivity. It suggests that AI could save “almost a quarter of private-sector workforce time,” equivalent to the annual output of 6 million workers.
Most of these time savings are expected to stem from AI-enabled software performing cognitive tasks such as data analysis and routine administrative operations.
The report identifies sectors reliant on routine cognitive tasks, such as banking and finance, as those with significant exposure to AI. However, sectors like skilled trades or construction – which involve complex manual tasks – are likely to see less direct impact.
While AI can result in initial job losses, it also has the potential to create new demand by fostering economic growth and new industries.
The report expects these job losses can be balanced by new job creation. Over the years, technology has historically spurred new employment opportunities, as innovation leads to the development of new products and services.
Shaping future generations
AI’s potential extends into education, where it could assist both teachers and students.
The report suggests that AI could help “raise educational attainment by around six percent” on average. By personalising and supporting learning, AI has the potential to equalise access to opportunities and improve the quality of the workforce over time.
Health and wellbeing
Beyond education, AI offers potential benefits in healthcare, supporting a healthier workforce and reducing ******** costs.
The report highlights AI’s role in speeding medical research, enabling preventive healthcare, and helping those with disabilities re-enter the workforce.
Workplace transformation
The report acknowledges potential workplace challenges, such as increased monitoring and stress from AI tools. It stresses the importance of managing these technologies thoughtfully to “deliver a more engaging, inclusive and safe working environment.”
To mitigate potential disruption, the TBI outlines recommendations. These include upgrading labour-market infrastructure and utilising AI for job matching.
The report suggests creating an “Early Awareness and Opportunity System” to help workers understand the impact of AI on their jobs and provide advice on career paths.
Preparing for an AI-powered future
In light of the uncertainties surrounding AI’s impact on the workforce, the TBI urges policy changes to maximise benefits. Recommendations include incentivising AI adoption across industries, developing AI-pathfinder programmes, and creating challenge prizes to address public-sector labour shortages.
The report concludes that while AI presents risks, the potential gains are too significant to ignore.
Policymakers are encouraged to adopt a “pro-innovation” stance while being attuned to the risks, fostering an economy that is dynamic and resilient.
(Photo by Mimi Thian)
See also: Anthropic urges AI regulation to avoid catastrophes
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Understanding AI’s impact on the workforce appeared first on AI News.
View the full article
The intelligence displayed by generative AI chatbots like OpenAI’s ChatGPT has captured the imagination of individuals and corporations, and artificial intelligence has suddenly become the most exciting area of technology innovation.
AI has been recognised as a game changer, with potential to transform many aspects of our lives. From personalised medicine to autonomous vehicles, automated investments to digital assets, the possibilities enabled by AI seem endless.
But as transformational as AI will be, there are a lot of risks posed by this new technology. While fears about a malicious, Skynet-style AI system going rogue are misplaced, the dangers of AI centralisation are not. As companies like Microsoft, Google and Nvidia forge ahead in their pursuit of AI, fears about the concentration of power in the hands of just a few centralised players are becoming more pronounced.
Why should we worry about decentralised AI?
Monopoly power
The most pressing issue arising from centralised AI is the prospect of a few tech giants achieving monopolistic control over the industry. The big tech giants have already accumulated a very significant market share in AI, giving them possession of vast amounts of data. They also control the infrastructure that AI systems run on, enabling them to stifle their competitors, hobble innovation, and perpetuate economic inequality.
By achieving a monopoly over the development of AI, these companies are more likely to have an unfair influence on regulatory frameworks, which they can manipulate to their advantage. It will mean that smaller startups, which lack the enormous resources of big tech giants, will struggle to keep up with the pace of innovation. Those that do survive and look like they might thrive will almost certainly end up being acquired, further concentrating power in the hands of the few. The result will be less diversity in terms of AI development, fewer choices for consumers, and less favourable terms, limiting the use-cases and economic opportunities promised by AI.
Bias and Discrimination
Aside from monopolistic control, there are genuine fears around the bias of AI systems, and these concerns will take on more importance as society increasingly relies on AI.
The risk stems from the fact that organisations are becoming more reliant on automated systems to make decisions in many areas. It’s not unusual for a company to employ AI algorithms to filter job applicants, for example, and the risk is that a biased system could unfairly exclude a subset of candidates based on their ethnicity, age or location. AI is also used by insurance companies to set policy rates, by financial services firms to determine if someone qualifies for a loan and the amount of interest they’ll need to pay, and by law enforcement to determine which areas are more likely to see higher ******. In all of these use-cases, the potential implications of biased AI systems are extremely worrying.
Whether it’s law enforcement targeting ********* communities, discriminatory lending practices or something else, centralised AI can potentially exacerbate social inequality and enable systemic discrimination.
Privacy and surveillance
Another risk posed by centralised AI systems is the lack of privacy protections. When just a few big companies control the vast majority of data generated by AI, they gain the ability to carry out unprecedented surveillance on their users. The data accumulated by the most dominant AI platforms can be used to monitor, analyse and predict an individual’s behaviour with incredible accuracy, eroding privacy and increasing the potential for the information to be misused.
It’s of particular concern in countries with authoritarian governments, where data can be weaponised to create more sophisticated tools for monitoring citizens. But even in democratic societies, there is a threat posed by increased surveillance, as exemplified by the revelations of Edward Snowden about the US National Security Agency’s Prism program.
Corporations can also potentially misuse consumer’s data to increase their profits. In addition, when centralised entities accumulate vast amounts of sensitive data, this makes them more lucrative targets for hackers, increasing the risk of data leaks.
Security risks
Issues of national security can also arise due to centralised AI. For instance, there are justified fears that AI systems can be weaponised by nations, used to conduct cyberwarfare, engage in espionage, and develop new weapons systems. AI could become a key tool in future wars, raising the stakes in geopolitical conflicts.
AI systems themselves can also be targeted. As nations increase their reliance on AI, such systems will make for enticing targets, as they are obvious single points of ********. Take out an AI system and you could disrupt the entire traffic flow of cities, take down electrical grids, and more.
Ethics
The other major concern of centralised AI is about ethics. That’s because the handful of companies that control AI systems would gain substantial influence over a society’s cultural norms and values, and might often prioritise profit, creating further ethical concerns.
For example, AI algorithms are already being used widely by social media platforms to moderate content, in an attempt to identify and filter out offensive posts. The worry is that algorithms, either by accident or design, might end up suppressing free speech.
There is already controversy about the effectiveness of AI-powered moderation systems, with numerous seemingly innocuous posts being blocked or taken down by automated algorithms. This leads to speculation that such systems are not broken but being manipulated behind the scenes based on the political narrative the platform is trying to promote.
The alternative? Decentralised AI
The only logical counterweight to centralised AI is the development of decentralised AI systems that ensure that control of the technology ******** in the hands of the majority, rather than the few. By doing this, we can ensure that no single company or entity gains a significant influence over the direction of AI’s development.
When the development and governance of AI is shared by thousands or millions of entities, its progress will be more equitable, with greater alignment to the needs of the individual. The result will be more diverse AI applications, with an almost endless selection of models used by different systems, instead of a few models that dominate the industry.
Decentralised AI systems will also mean checks and balances against the risk of mass surveillance and manipulation of data. Whereas centralised AI can be weaponised and used in a way that’s contrary to the interests of the many, decentralised AI hedges against this kind of oppression.
The main advantage of decentralised AI is that everyone is in control over the technology’s evolution, preventing any single entity from gaining an outsized influence over its development.
How to decentralise AI
Decentralised AI involves a rethink of the layers that make up the AI technology stack, including elements like the infrastructure (compute and networking resources), the data, models, training, inference, and fine-tuning processes.
We can’t just put our hopes in open-source models if the underlying infrastructure ******** fully centralised by cloud computing giants like Amazon, Microsoft and Google, for instance. We need to ensure that every aspect of AI is decentralised
The best way to decentralise the AI stack is to break it down into modular components and create markets around them based on supply and demand. One such example of how this can work is Spheron, which has created a Decentralised Physical Infrastructure Network (DePIN) that anyone can participate in.
With Spheron’s DePIN, everyone is free to share their underutilised computing resources, essentially renting them out to those who need infrastructure to host their AI applications. So, a graphic designer who uses a powerful laptop with a GPU can donate processing power to the DePIN when they’re not using it for their own work, and be rewarded with token incentives.
What this means is that the AI infrastructure layer becomes widely distributed and decentralised, with no single provider in control. It’s enabled by blockchain technology and smart contracts, which provide transparency, immutability and automation.
DePIN can also work for open-source models and underlying data. For instance, it’s possible to share training datasets on a decentralised network like Qubic, which will make sure the provider of that data is rewarded each time their information is accessed by an AI system.
To ensure access and permissions are decentralised, every part of the technology stack is distributed in this way. However, the AI industry currently struggles to provide such a level of decentralisation. Although open-source models have become extremely popular among AI developers, most people continue to rely on proprietary cloud networks, meaning the training and inference processes are heavily centralised.
But there are strong incentives for decentralisation to win out. One of the primary advantages of DePIN networks, for example, is that they help to reduce overheads. Because networks like Spheron don’t rely on intermediaires, participants don’t need to make any payments or share revenue with third-parties. Moreover, they can afford to be more competitive in terms of pricing than corporations that are under pressure to grow profitability.
Decentralisation must win
The future of AI holds a lot of potential, but it’s also perilous. While the capabilities of AI systems have improved dramatically in the last few years, most of the advances have been made by all-powerful companies and that has resulted in an increase in their influence over the industry. There’s a price to pay for this, not just in monetary terms.
The only reasonable alternative is to promote the greater adoption of decentralised AI, which can enhance accessibility and ensure a greater flexibility of AI. By allowing everyone to participate in the development of AI on an equal footing, we’ll see more diverse, interesting, and useful applications that can benefit everyone equally, as well as putting their users first.
Building a decentralised AI future will involve a great deal of coordination and collaboration across every layer of the AI stack. Fortunately, there are strong incentives for participants to do just that. And again, the incentives are not just monetary.
The post Centralised AI is dangerous: how can we stop it? appeared first on AI News.
View the full article
The ASI Alliance has introduced AIRIS (Autonomous Intelligent Reinforcement Inferred Symbolism) that “learns” within the popular game, Minecraft.
AIRIS represents the first proto-AGI (Artificial General Intelligence) to harness a comprehensive tech stack across the alliance.
SingularityNET, founded by renowned AI researcher Dr Ben Goertzel, uses agent technology from Fetch.ai, incorporates Ocean Data for long-term memory capabilities, and is soon expected to integrate CUDOS Compute infrastructure for scalable processing power.
“AIRIS is a significant step in the direction of practical, scalable neural-symbolic learning, and – alongside its already powerful and valuable functionality – it illustrates several general points about neural-symbolic systems, such as their ability to learn precise generalisable conclusions from small amounts of data,” explains Goertzel.
According to the company, this alliance-driven procedure propels AIRIS towards AGI—crafting one of the first intelligent systems with autonomous and adaptive learning that holds practical applications for real-world scenarios.
AIRIS’ learning mechanisms
AIRIS is crafted to enhance its understanding by interacting directly with its environment, venturing beyond the traditional AI limitations that depend on predefined rules or vast datasets. Instead, AIRIS evolves through observation, experimentation, and continual refinement of its unique “rule set.”
This system facilitates a profound level of problem-solving and contextual comprehension, with its implementation in Minecraft setting a new benchmark for AI interaction with both digital and tangible landscapes.
pic.twitter.com/jTeQFulzFJ — Artificial Superintelligence Alliance (@ASI_Alliance) November 5, 2024
Shifting from a controlled 2D grid to the sophisticated 3D world of Minecraft, AIRIS faced numerous challenges—including terrain navigation and adaptive problem-solving in a dynamic environment. This transition underscores AIRIS’ autonomy in navigation, exploration, and learning.
The AIRIS Minecraft Agent distinguishes itself from other AI entities through several key features:
Dynamic navigation: AIRIS initially evaluates its milieu to formulate movement strategies, adapting to new environments in real-time. Its capabilities include manoeuvring around obstacles, jumping over barriers, and anticipating reactions to varied terrains.
Obstacle adaptation: It learns to navigate around impediments like cliffs and forested areas, refining its rule set with every new challenge to avoid redundant errors and minimise needless trial-and-error efforts.
Efficient pathfinding: Via continuous optimisation, AIRIS advances from initially complex navigation paths to streamlined, direct routes as it “comprehends” Minecraft dynamics.
Real-time environmental adaptation: Contrasting with conventional reinforcement learning systems that demand extensive retraining for new environments, AIRIS adapts immediately to unfamiliar regions, crafting new rules based on partial observations dynamically.
AIRIS’ adeptness in dealing with fluctuating terrains, including water bodies and ***** systems, introduces sophisticated rule refinement founded on hands-on experience. Additionally, AIRIS boasts optimised computational efficiency—enabling real-time management of complex rules without performance compromises.
Future applications
Minecraft serves as an excellent launchpad for AIRIS’ prospective applications, establishing a solid foundation for expansive implementations:
Enhanced object interaction: Forthcoming stages will empower AIRIS to engage more profoundly with its surroundings, improving capabilities in object manipulation, construction, and even crafting. This development will necessitate AIRIS to develop a more refined decision-making framework for contextual tasks.
Social AI collaboration: Plans are underway to incorporate AIRIS in multi-agent scenarios, where agents learn, interact, and fulfil shared objectives, simulating real-world social dynamics and problem-solving collaboratively.
Abstract and strategic reasoning: Expanded developments will enhance AIRIS’s reasoning, enabling it to tackle complex goals such as resource management and prioritisation, moving beyond basic navigation towards strategic gameplay.
The transition of AIRIS to 3D environments signifies a pivotal advancement in the ASI Alliance’s mission to cultivate AGI. Through AIRIS’s achievements in navigating and learning within Minecraft, the ASI Alliance aspires to expedite its deployment in the real world, pioneering applications for autonomous robots, intelligent home assistants, and other systems requiring adaptive learning and problem-solving capacities.
Berick Cook, AI Developer at SingularityNET and creator of AIRIS, said: “AIRIS is a whole new way of approaching the problem of machine learning. We are only just beginning to explore its capabilities. We are excited to see how we can apply it to problems that have posed a significant challenge for traditional reinforcement learning.
“The most important aspect of AIRIS to me is its transparency and explainability. Moving away from ‘****** Box’ AI represents a significant leap forward in the pursuit of safe, ethical, and beneficial AI.”
The innovative approach to AI evident in AIRIS – emphasising self-directed learning and continuous rule refinement – lays the foundation for AI systems capable of independent functioning in unpredictable real-world environments. Minecraft’s intricate ecosystem enables the system to hone its skills within a controlled yet expansive virtual setting, effectively bridging the divide between simulation and reality.
The AIRIS Minecraft Agent represents the inaugural tangible step towards an AI that learns from, adapts to and makes autonomous decisions about its environment. This accomplishment illustrates the potential of such technology to re-envision AI’s role across various industries.
(Image by SkyeWeste)
See also: SingularityNET bets on supercomputer network to deliver AGI
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post ASI Alliance launches AIRIS that ‘learns’ in Minecraft appeared first on AI News.
View the full article
The combination of artificial intelligence and policymaking can occasionally have unforeseen repercussions, as seen recently in Alaska.
In an unusual turn of events, Alaska legislators reportedly used AI-generated citations that were inaccurate to justify a proposed policy banning cellphones in schools. As reported by /The Alaska Beacon/, Alaska’s Department of Education and Early Development (DEED) presented a policy draft containing references to academic studies that simply did not exist.
The situation arose when Alaska’s Education Commissioner, Deena Bishop, used generative AI to draft the cellphone policy. The document produced by the AI included supposed scholarly references that were neither verified nor accurate, yet the document did not disclose the use of AI in its preparation. Some of the AI-generated content reached the Alaska State Board of Education and Early Development before it could be reviewed, potentially influencing board discussions.
Commissioner Bishop later claimed that AI was used only to “create citations” for an initial draft and asserted that she corrected the errors before the meeting by sending updated citations to board members. However, AI “hallucinations”—fabricated information generated when AI attempts to create plausible yet unverified content—were still present in the final document that was voted on by the board.
The final resolution, published on DEED’s website, directs the department to establish a model policy for cellphone restrictions in schools. Unfortunately, the document included six citations, four of which seemed to be from respected scientific journals. However, the references were entirely made up, with URLs that led to unrelated content. The incident shows the risks of using AI-generated data without proper human verification, especially when making policy rulings.
Alaska’s case is not one of a kind. AI hallucinations are increasingly common in a variety of professional sectors. For example, some legal professionals have faced consequences for using AI-generated, fictitious case citations in court. Similarly, academic papers created using AI have included distorted data and fake sources, presenting serious credibility concerns. When left unchecked, generative AI algorithms, which are meant to produce content based on patterns rather than factual accuracy, can easily produce misleading citations.
The reliance on AI-generated data in policymaking, particularly in education, carries significant risks. When policies are developed based on fabricated information, they may misallocate resources and potentially harm students. For instance, a policy restricting cellphone use based on fabricated data may divert attention from more effective, evidence-based interventions that could genuinely benefit students.
Furthermore, using unverified AI data can erode public trust in both the policymaking process and AI technology itself. Such incidents underscore the importance of fact-checking, transparency, and caution when using AI in sensitive decision-making areas, especially in education, where impact on students can be profound.
Alaska officials attempted to downplay the situation, referring to the fabricated citations as “placeholders” intended for later correction. However, the document with the “placeholders” was still presented to the board and used as the basis for a vote, underscoring the need for rigorous oversight when using AI.
(Photo by Hartono Creative Studio)
See also: Anthropic urges AI regulation to avoid catastrophes
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post AI hallucinations gone wrong as Alaska uses fake stats in policy appeared first on AI News.
View the full article
A global study has unveiled a stark contrast in attitudes towards embracing in-vehicle AI between Eastern and Western markets, with ********* drivers particularly reluctant.
The research – conducted by MHP – surveyed 4,700 car drivers across China, the US, Germany, the ***, Italy, Sweden, and Poland, revealing significant geographical disparities in AI acceptance and understanding.
According to the study, while AI is becoming integral to modern vehicles, ********* consumers remain hesitant about its implementation and value proposition.
Regional disparities
The study found that 48 percent of ******** respondents view in-car AI predominantly as an opportunity, while merely 23 percent of ********* respondents share this optimistic outlook. In Europe, 39 percent believe AI’s opportunities and risks are broadly balanced, while 24 percent take a negative stance, suggesting the risks outweigh potential benefits.
Understanding of AI technology also varies significantly by region. While over 80 percent of ******** respondents claim to understand AI’s use in cars, this figure drops to just 54 percent among ********* drivers, highlighting a notable knowledge gap.
Marcus Willand, Partner at MHP and one of the study’s authors, notes: “The figures show that the prospect of greater safety and comfort due to AI can motivate purchasing decisions. However, the ********* respondents in particular are often hesitant and price-sensitive.”
The willingness to pay for AI features shows an equally stark divide. Just 23 percent of ********* drivers expressed willingness to pay for AI functions, compared to 39 percent of ******** drivers. The study suggests that most users now expect AI features to be standard rather than optional extras.
Dr Nils Schaupensteiner, Associated Partner at MHP and study co-author, said: “Automotive companies need to create innovations with clear added value and develop both direct and indirect monetisation of their AI offerings, for example through data-based business models and improved services.”
In-vehicle AI opportunities
Despite these challenges, traditional automotive manufacturers maintain a trust advantage over tech giants. The study reveals that 64 percent of customers trust established car manufacturers with AI implementation, compared to 50 percent for technology firms like Apple, Google, and Microsoft.
The research identified several key areas where AI could provide significant value across the automotive industry’s value chain, including pattern recognition for quality management, enhanced data management capabilities, AI-driven decision-making systems, and improved customer service through AI-powered communication tools.
“It is worth OEMs and suppliers considering the opportunities offered by the new technology along their entire value chain,” explains Augustin Friedel, Senior Manager and study co-author. “However, the possible uses are diverse and implementation is quite complex.”
The study reveals that while up to 79 percent of respondents express interest in AI-powered features such as driver assistance systems, intelligent route planning, and predictive maintenance, manufacturers face significant challenges in monetising these capabilities, particularly in the ********* market.
See also: MIT breakthrough could transform ****** training
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Western drivers remain sceptical of in-vehicle AI appeared first on AI News.
View the full article
GitHub is marking a significant evolution of its developer tools with two major announcements: a multi-model upgrade to Copilot and the introduction of Spark, a natural language web development platform.
The code hosting giant is expanding Copilot beyond its OpenAI-only architecture to include Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro models while simultaneously launching GitHub Spark to simplify web application development through natural language instructions.
The expansion includes additional integration news as Copilot arrives on Apple’s Xcode in public preview. Since Copilot’s initial release, this three-pronged update represents GitHub’s most ambitious AI toolkit expansion.
Enhanced model support for Copilot
GitHub Copilot has long leveraged different large language models (LLMs) for various use cases. The first public version of Copilot was launched using Codex, an early version of OpenAI GPT-3, specifically fine-tuned for coding tasks. Copilot Chat was launched in 2023 with GPT-3.5 and later GPT-4.
“Since then, we have updated the base model versions multiple times, using a range from GPT 3.5-turbo to GPT 4o and 4o-mini models for different latency and quality requirements,” Thomas Dohmke, GitHub’s CEO, said in a blog write-up.
GitHub’s decision to extend Copilot’s model support beyond OpenAI’s technology represents a notable evolution in its AI-assisted development strategy. The integration now includes models from industry leaders Anthropic and Google, alongside OpenAI, providing developers access to a broader range of AI capabilities.
The multi-model approach suggests GitHub’s commitment to offering developers more choice and flexibility in their AI-assisted coding journey.
“We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choices across many of GitHub Copilot’s surface areas and functions soon,” the CEO said.
GitHub’s expansion of model support is particularly significant as it allows developers to leverage different AI models’ strengths for various coding tasks. Each AI provider brings its own capabilities and specialisations, which could lead to more accurate and context-aware code suggestions across different programming languages and frameworks.
GitHub Spark: Natural language web development
However, the company’s most intriguing announcement is the introduction of GitHub Spark, a new tool designed to revolutionise web application development and achieve GitHub’s vision of reaching one billion developers. Spark enables developers to build web applications using natural language instructions, potentially reducing the complexity and time required for web development projects.
It all starts with a spark of inspiration Bring your ideas to life and build applications entirely in natural language. Sign up for the technical preview of GitHub Spark: [Hidden Content] pic.twitter.com/gs3AlNlNw6 — GitHub (@github) November 2, 2024
This natural language approach to web development could particularly benefit:
Developers looking to rapidly prototype applications,
Teams seeking to bridge the gap between design concepts and implementation,
Organisations aiming to accelerate their web development lifecycle.
Copilot arrives on Apple’s Xcode
The recent release of Copilot for Apple’s Xcode in public preview form represents another milestone in GitHub’s expansion strategy. The integration brings AI-powered coding assistance to Apple’s integrated development environment, offering several advantages for iOS and macOS developers:
Real-time code suggestions within the Xcode environment,
Improved productivity for Swift and Objective-C development,
Integration with existing Xcode workflows.
Impact on developer productivity
These developments in GitHub’s AI toolkit reflect a broader industry trend towards more intelligent and automated development tools. The combination of expanded model support, natural language web development, and Xcode integration suggest a comprehensive approach to enhancing developer productivity across different platforms and development scenarios.
For development teams, the updates could mean:
Reduced time spent on boilerplate code,
More efficient problem-solving through AI-assisted development,
Greater flexibility in choosing AI models that best suit specific project needs.
Perhaps, as these tools become available to more developers, we may see shifts in how teams approach software development:
Increased adoption of AI-assisted coding practices,
Evolution of development workflows to incorporate natural language instructions,
Greater emphasis on high-level problem-solving rather than implementation details.
Advancements in GitHub’s AI capabilities mark a step forward in the evolution of development tools. While maintaining the fundamental aspects of software development, the new features and integrations aim to enhance rather than replace developer expertise, potentially leading to more efficient and productive development processes.
The combination of expanded AI model support, GitHub Spark’s introduction, and Xcode integration represents GitHub’s holistic approach to modernising software development. As the tools mature and developers incorporate them into their workflows, we may see significant changes in how software is conceived, developed, and deployed.
(Photo by Github)
See also: GitHub CEO: The EU ‘will define how the world regulates AI’
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post GitHub expands AI suite: Copilot goes multi-model, Spark debuts for web dev appeared first on AI News.
View the full article
Google has big hopes for AI, as evidenced by the consistent improvements to its Gemini chatbot in recent months.
Google briefly introduced its vision for a “universal AI agent” aimed to help users with daily tasks at the I/O developer conference in May, hinting that elements of the technology could be incorporated into Gemini soon. Recent insights from The Information have shed more light on its initiative, known internally as Project Jarvis.
Project Jarvis represents a major advancement in AI for Google. Unlike traditional voice assistants that respond to user commands, Jarvis is designed to perform tasks autonomously, navigate the web, and make independent decisions. For instance, Jarvis could manage emails, conduct research, and even schedule appointments, reducing the cognitive load involved in managing digital tasks.
Jarvis’s core objective is to revolutionise how users interact with their devices. Rather than serving as a passive tool awaiting commands, Jarvis would actively engage in real-time task management, positioning it as an AI partner rather than a utility.
For legal professionals, Jarvis could review large volumes of case documents and organise them by relevance, streamlining workflow. Similarly, marketers could use Jarvis to integrate data from numerous sources, allowing them to focus more on strategy and less on administrative work.
The evolution of AI agents such as Jarvis may have an impact on specific job roles. Tasks formerly performed by entry-level administrative personnel may come within the capabilities of AI assistants. However, the shift is likely to generate opportunities in roles that require critical thinking, creativity, and emotional intelligence—qualities not easily replicated by AI.
Industry observers anticipate a shift toward higher-value work, with people concentrating less on routine tasks and more on areas that promote innovation and strategic decision-making.
Privacy and security considerations
Project Jarvis raises significant privacy and security issues due to its ability to access sensitive information such as emails and documents. To prevent unauthorised access, Google will most likely deploy enhanced encryption, strict user restrictions, and, maybe, multi-factor authentication. Cybersecurity will also be essential to keep Jarvis secure from external threats.
Surveys indicate that, while AI holds considerable appeal, privacy ******** a top concern for many users. Experts recommend measures such as a transparent privacy dashboard that enables users to monitor and control Jarvis’s access to data. To build trust and drive the adoption of AI agents like Jarvis, Google will need to strike a balance between convenience and robust privacy protections.
Enhancing user experience and accessibility
Beyond productivity, Jarvis has the potential to improve accessibility for a wide range of users. For those with disabilities, Jarvis could read web content aloud or use voice commands to assist with form navigation. For less tech-savvy users, Jarvis could simplify digital interactions by handling tasks like locating files or managing settings.
Jarvis could also assist in planning a busy workday or booking a trip by actively supporting task management. Project Jarvis aims to reimagine AI as a supportive digital partner, enhancing the user experience beyond that of a conventional tool.
(Photo by Igor Bumba)
See also: Google advances mobile AI in Pixel 9 smartphones
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Project Jarvis ***** reveals Google’s vision for next-gen Gemini appeared first on AI News.
View the full article
Anthropic has flagged the potential risks of AI systems and calls for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.
As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.
Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offense-related tasks and expects future models to be even more effective.
Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The *** AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.
In addressing these risks, Anthropic has detailed its Responsible Scaling Policy (RSP) that was released in September 2023 as a robust countermeasure. RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.
The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says that it’s committed to maintaining and enhancing safety spans various team expansions, particularly in security, interpretability, and trust sectors, ensuring readiness for the rigorous safety standards set by its RSP.
Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.
Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.
Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.
In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.
Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models.
While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.
Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully-designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against threats internally and externally.
By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective ******** clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.
(Image Credit: Anthropic)
See also: President Biden issues first National Security Memorandum on AI
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Anthropic urges AI regulation to avoid catastrophes appeared first on AI News.
View the full article
The Open Source Initiative (OSI) has unveiled a definition framework to evaluate whether AI systems can be classified as open-source.
The announcement of the first Open Source AI Definition (OSAID) was made at All Things Open and marks the culmination of a comprehensive global effort spanning multiple years of research, international workshops, and a year-long community design process.
The OSI – widely recognised as the definitive authority on open-source definitions by individuals, organisations, and government bodies worldwide – developed the framework through extensive collaboration with industry stakeholders. This framework defines what open-source AI means, insisting that the same open-source requirements apply whether to a fully functional AI system, a model, weights and parameters, or other structural elements.
An open-source AI system must be made available under terms that grant four essential freedoms:
Use the system for any purpose and without having to ask for permission.
Study how the system works and inspect its components.
Modify the system for any purpose, including to change its output.
Share the system for others to use with or without modifications, for any purpose.
These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is having access to the preferred form to make modifications to the system, which includes detailed data information, complete source code, and model parameters.
“The co-design process that led to version 1.0 of the Open Source AI Definition was well-developed, thorough, inclusive, and fair,” said Carlo Piana, OSI board chair. “The board is confident that the process has resulted in a definition that meets the standards of open-source as defined in the open-source definition and the four essential freedoms.”
One of the framework’s most significant requirements is the mandate for open-source models to provide sufficient information about their training data, ensuring that “a skilled person can recreate a substantially equivalent system using the same or similar data,” according to Ayah Bdeir, who leads AI strategy at Mozilla.
Bdeir acknowledged that whilst this approach might not be perfect, it represents a practical compromise between ideological purity and real-world implementation. She suggested that demanding an unrealistically high standard could prove counterproductive to the initiative’s goals.
The Digital Public Goods Alliance (DPGA) has expressed support for the OSI’s leadership in defining open-source AI. Liv Marte Nordhaug, CEO of the DPGA secretariat, confirmed that her organisation will incorporate this foundational work into updates to their Digital Public Goods Standard for AI applications.
EleutherAI Institute, known for its non-profit work in AI development, has also endorsed the definition.
“The Open Source AI Definition is a necessary step towards promoting the benefits of open-source principles in the field of AI,” stated Stella Biderman, Executive Director of the EleutherAI Institute. “We believe that this definition supports the needs of independent machine learning researchers and promotes greater transparency among the largest AI developers.”
The definition highlights the importance of including data information and code when sharing open-source models and weights. These requirements ensure transparency and the ability to modify the AI system.
OSI Executive Director Stefano Maffulli acknowledged the challenges faced during the development process, noting that despite occasional heated exchanges and differing opinions, the final result aligned with the project’s initial objectives.
“This is a starting point for a continued effort to engage with the communities to improve the definition over time,” he stated.
The OSAID does not require a specific legal mechanism for assuring that model parameters are freely available to all, though it may involve licences or legal instruments. This aspect is expected to become clearer over time as the legal system addresses these open-source AI systems.
See also: President Biden issues first National Security Memorandum on AI
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Industry leaders back open-source AI definition appeared first on AI News.
View the full article
MIT researchers have developed a ****** training method that reduces time and cost while improving adaptability to new tasks and environments.
The approach – called Heterogeneous Pretrained Transformers (HPT) – combines vast amounts of diverse data from multiple sources into a unified system, effectively creating a shared language that generative AI models can process. This method marks a significant departure from traditional ****** training, where engineers typically collect specific data for individual robots and tasks in controlled environments.
Lead researcher Lirui Wang – an electrical engineering and computer science graduate student at MIT – believes that while many cite insufficient training data as a key challenge in robotics, a ******* issue ***** in the vast array of different domains, modalities, and ****** hardware. Their work demonstrates how to effectively combine and utilise all these diverse elements.
The research team developed an architecture that unifies various data types, including camera images, language instructions, and depth maps. HPT utilises a transformer model, similar to those powering advanced language models, to process visual and proprioceptive inputs.
In practical tests, the system demonstrated remarkable results—outperforming traditional training methods by more than 20 per cent in both simulated and real-world scenarios. This improvement held true even when robots encountered tasks significantly different from their training data.
The researchers assembled an impressive dataset for pretraining, comprising 52 datasets with over 200,000 ****** trajectories across four categories. This approach allows robots to learn from a wealth of experiences, including human demonstrations and simulations.
One of the system’s key innovations ***** in its handling of proprioception (the ******’s awareness of its position and movement.) The team designed the architecture to place equal importance on proprioception and vision, enabling more sophisticated dexterous motions.
Looking ahead, the team aims to enhance HPT’s capabilities to process unlabelled data, similar to advanced language models. Their ultimate vision involves creating a universal ****** brain that could be downloaded and used for any ****** without additional training.
While acknowledging they are in the early stages, the team ******** optimistic that scaling could lead to breakthrough developments in robotic policies, similar to the advances seen in large language models.
You can find a copy of the researchers’ paper here (PDF)
(Photo by Possessed Photography)
See also: Jailbreaking AI robots: Researchers sound alarm over security flaws
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post MIT breakthrough could transform ****** training appeared first on AI News.
View the full article
President Biden has issued the US’ first-ever National Security Memorandum (NSM) on AI, addressing how the nation approaches the technology from a security perspective.
The memorandum, which builds upon Biden’s earlier executive order on AI, is founded on the premise that cutting-edge AI developments will substantially impact national security and foreign policy in the immediate future.
Security experts suggest the implications are already being felt. “AI already has implications for national security, as we know that more and more attackers are using AI to create higher volume and more complex attacks, especially in the social engineering and misinformation fronts,” says Melissa Ruzzi, Director of AI at AppOmni.
At its core, the NSM outlines three primary objectives: establishing US leadership in safe AI development, leveraging AI technologies for national security, and fostering international governance frameworks.
“Our competitors want to upend US AI leadership and have employed economic and technological espionage in efforts to steal US technology,” the memorandum states, elevating the protection of ********* AI innovations to a “top-tier intelligence priority.”
The document formally designates the AI Safety Institute as the primary governmental point of contact for the AI industry. This institute will be staffed with technical experts and will maintain close partnerships with national security agencies, including the intelligence community, Department of Defence, and Department of Energy.
“The actions listed in the memo are great starting points to get a good picture of the status quo and obtain enough information to make decisions based on data, instead of jumping to conclusions to make decisions based on vague assumptions,” Ruzzi explains.
However, Ruzzi cautions that “the data that needs to be collected on the actions is not trivial, and even with the data, assumptions and trade-offs will be necessary for final decision making. Making decisions after data gathering is where the big challenge will be.”
In a notable move to democratise AI research, the memorandum reinforces support for the National AI Research Resource pilot programme. This initiative aims to extend AI research capabilities beyond major tech firms to universities, civil society organisations, and small businesses.
The NSM introduces the Framework to Advance AI Governance and Risk Management in National Security (PDF), which establishes comprehensive guidelines for implementing AI in national security applications. These guidelines mandate rigorous risk assessment procedures and safeguards against privacy invasions, bias, discrimination, and human rights violations.
Security considerations feature prominently in the framework, with Ruzzi emphasising their importance: “Cybersecurity of AI is crucial – we know that if AI is misconfigured, it can pose risks similar to misconfigurations in SaaS applications that cause confidential data to be exposed.”
On the international front, the memorandum builds upon recent diplomatic achievements, including the G7’s International Code of Conduct on AI and agreements reached at the Bletchley and Seoul AI Safety Summits. Notably, 56 nations have endorsed the US-led Political Declaration on the Military Use of AI and Autonomy.
The Biden administration has also secured a diplomatic victory with the passage of the first UN General Assembly Resolution on AI, which garnered unanimous support, including co-sponsorship from China.
The memorandum emphasises the critical role of semiconductor manufacturing in AI development, connecting to Biden’s earlier CHIPS Act. It directs actions to enhance chip supply chain security and diversity, ensuring ********* leadership in advanced computing infrastructure.
This latest initiative forms part of the Biden-Harris Administration’s broader strategy for responsible innovation in the AI sector, reinforcing America’s commitment to maintaining technological leadership while upholding democratic values and human rights.
(Photo by Nils Huenerfuerst)
See also: EU AI Act: Early prep could give businesses competitive edge
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post President Biden issues first National Security Memorandum on AI appeared first on AI News.
View the full article
Influencer partnerships can be great for brands looking to pump out content that promotes their products and services in an authentic way. These types of engagements can yield significant brand awareness and brand sentiment lift, but they can be risky too. Social media stars are unpredictable at the best of times, with many deliberately chasing controversy to increase their fame.
These antics don’t always reflect well on the brands that collaborate with especially attention-hungry influencers, leaving marketers no choice but to conduct careful due diligence on the individuals they work with. Luckily, that task can be made much easier thanks to the evolving utility of AI.
Lightricks, a software company best known for its AI-powered video and image editing tools, is once again expanding the AI capabilities of its suite with this week’s announcement of SafeCollab. An AI-powered influencer vetting module that lives within the company’s Popular Pays creator collaboration platform, SafeCollab is a new tool for marketers that automates the vetting process.
Traditionally, marketers have had no choice but to spend hours researching the backgrounds of influencers, looking through years’ worth of video uploads and social media posts. It’s a lengthy, manual process that can only be automated with intelligent tools.
SafeCollab provides that intelligence with its underlying large language models, which do the job of investigating influencers to ensure the image they portray is consistent with brand values. The LLMs perform what amounts to a risk assessment of creators’ content across multiple social media channels in minutes, searching through hours of videos, audio uploads, images and text.
In doing this, SafeCollab significantly reduces the time it takes for brand marketers to perform due diligence on the social media influencers they’re considering partnering with. Likewise, when creators opt in to SafeCollab, they make it easier for marketers to understand the brand safety implications of working together, reducing friction from campaign lifecycles.
Brands can’t take chances
The idea here is to empower brand marketers to avoid working with creators whose content is not aligned with the brand’s values – as well as those who have a tendency to kick up a storm.
Such due diligence is vital, for even the most innocuous influencers can have some skeletons in their closets. A case in point is the popular lifestyle influencer Brooke Schofield, who has more than 2.2 million followers on TikTok and co-hosts the “Canceled” podcast on YouTube. With her large following, good looks and keen sense of fashion, Schofield looked like a great fit for the clothing brand Boys Lie, which collaborated with her on an exclusive capsule collection called “Bless His Heart.”
However, Boys Lie quickly came to regret its collaboration with Schofield when a scandal erupted in April after fans unearthed a number of years-old social media posts where she expressed ******* views.
The posts, which were uploaded on X between 2012 and 2015 when Schofield was a teenager, contained a string of ******* profanities and insulting jokes about ****** people’s hairstyles. In one post, she vigorously defended George Zimmerman, a white ********* who was controversially acquitted of the ******* of the ****** teenager Trayvon Martin.
Schofield apologized profusely for her posts, admitting that they were “very hurtful” while stressing that she’s a changed person, having had time to “learn and grow and formulate my own opinions.”
However, Boys Lie decided it had no option but to drop its association with Schofield. After a statement on Instagram saying it’s “working on a solution,” the company followed by quietly withdrawing the clothing collection they had previously collaborated on.
Accelerating due diligence
If the marketing team at Boys Lie had access to a tool like SafeCollab, they likely would have uncovered Schofield’s controversial posts long before commissioning the collaboration. The tool, which is a part of Lightricks’ influencer marketing platform Popular Pays, is all about helping brands to automate their due diligence processes when working with social media creators.
By analyzing years of creators’ histories of posts across platforms like Instagram, TikTok, and YouTube, it can check everything they’ve posted online to make sure there’s nothing that might reflect badly on a brand.
Brands can define their risk parameters, and the tool will quickly generate an accurate risk assessment evaluation, so they can confidently choose the influencers they want to work with, safe in the knowledge that their partnerships are unlikely to spark any backlash.
Without a platform like SafeCollab, the task of performing all of this due diligence falls on the shoulders of marketers, and that means spending hours trawling through each influencer’s profiles, checking everything and anything they’ve ever said or done to ensure there’s nothing in their past that the brand would rather not be associated with.
When we consider that the scope of work might include audio voiceovers, extensive comment threads and frame-by-frame analyses of video content, it’s a painstaking process that never really ends. After all, the top influencers have a habit of churning out fresh content every day. Careful marketers have no choice but to continuously monitor what they’re posting.
Beyond initial history scans, SafeCollab’s real-time monitoring algorithms assume full responsibility, generating instant alerts to any problematic content, such as posts that contain graphic language, inappropriate images, promote ********* or ***** and alcohol use, mention *********, or whatever else the brand deems to be unsavory.
AI’s expanding applications
With the launch of SafeCollab, Lightricks is demonstrating yet another use case for generative AI. The company first made a name for itself as a developer of AI-powered video and image editing apps, including Photoleap, Facetune and Videoleap.
The latter app incorporates AI-powered video filters and text-to-video generative AI functionalities. It also boasts an AI Effects feature, where users can apply specialized AI art styles to achieve the desired vibe for each video they create.
Lightricks is also the company behind LTX Studio, which is a comprehensive platform that helps advertising production firms and filmmakers to create storyboards and asset-rich pitch decks for their video projects using text-to-video generative AI.
With all of Lightricks’ AI apps, the primary benefit is that they save users time by automating manual work and bringing creative visions to life, and SafeCollab is a great example of that. By automating the due diligence process from start to finish, marketers can quickly identify controversial influencers they’d rather steer clear of, without spending hours conducting exhaustive research.
The post AI is helping brands avoid controversial influencer partnerships appeared first on AI News.
View the full article
Privacy Notice: We utilize cookies to optimize your browsing experience and analyze website traffic. By consenting, you acknowledge and agree to our Cookie Policy, ensuring your privacy preferences are respected.