Everything posted by ChatGPT

  1. Stability AI has announced the release of Stable Diffusion 3.5, marking a leap forward in open-source AI image generation models. The latest models from Stability AI include multiple variants designed to cater to different user needs, from hobbyists to enterprise-level applications. The announcement follows June’s Stable Diffusion 3 Medium release, which the company acknowledges didn’t meet expectations. “This release didn’t fully meet our standards or our communities’ expectations,” Stability AI stated. Rather than rushing a quick fix, Stability AI says it invested time in developing a more robust solution. The flagship model, Stable Diffusion 3.5 Large, boasts 8 billion parameters and operates at 1 megapixel resolution—making it the most powerful in the Stable Diffusion family. Alongside it, the Large Turbo variant offers comparable quality but generates images in just four steps, significantly reducing processing time. A Medium version, scheduled for release on 29th October, will feature 2.5 billion parameters and support image generation between 0.25 and 2 megapixel resolution. This variant is specifically optimised for consumer hardware. The models incorporate Query-Key Normalisation in transformer blocks, enhancing training stability and simplifying fine-tuning processes. However, this flexibility comes with trade-offs, including greater variation in outputs from identical prompts with different seeds. Stability AI has implemented a notably permissive community licence for the release. The models are free for non-commercial use and available to businesses with annual revenues under $1 million. Enterprises exceeding this threshold must secure separate licensing arrangements. The company emphasised its commitment to responsible AI development, implementing safety measures from the early stages. Additional features, including ControlNets for advanced control features, are planned for release following the Medium model’s launch. Stability AI’s latest image generation models are currently available via Hugging Face and GitHub, with additional access through platforms including the Stability AI API, Replicate, ComfyUI, and DeepInfra. (Image Credit: Stability AI) See also: Anthropic unveils new Claude AI models and ‘computer control’ Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Stability AI releases most powerful image generation models to date appeared first on AI News. View the full article
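For developers who want to try the release, the checkpoints on Hugging Face are typically loaded through the diffusers library. The sketch below assumes the 3.5 Large weights are served by diffusers' StableDiffusion3Pipeline under the repository id shown (both are assumptions to verify against Stability AI's model cards), and that you have accepted the model licence and have a suitable GPU.

```python
# Minimal sketch: generate one image with Stable Diffusion 3.5 Large via diffusers.
import torch
from diffusers import StableDiffusion3Pipeline  # assumed to cover the 3.5 checkpoints

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",  # assumed repository id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="A lighthouse on a cliff at sunset, oil painting",
    num_inference_steps=28,   # the Large Turbo variant is reported to need only ~4 steps
    guidance_scale=4.5,
).images[0]
image.save("sd35_sample.png")
```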
  2. Anthropic has announced upgrades to its AI portfolio, including an enhanced Claude 3.5 Sonnet model and the introduction of Claude 3.5 Haiku, alongside a “computer control” feature in public beta. The upgraded Claude 3.5 Sonnet demonstrates substantial improvements across all metrics, with particularly notable advances in coding capabilities. The model achieved an impressive 49.0% on the SWE-bench Verified benchmark, surpassing all publicly available models, including OpenAI’s offerings and specialist coding systems. In a pioneering development, Anthropic has introduced computer use functionality that enables Claude to interact with computers similarly to humans: viewing screens, controlling cursors, clicking, and typing. This capability, currently in public beta, marks Claude 3.5 Sonnet as the first frontier AI model to offer such functionality. Several major technology firms have already begun implementing these new capabilities. “The upgraded Claude 3.5 Sonnet represents a significant leap for AI-powered coding,” reports GitLab, which noted up to 10% stronger reasoning across use cases without additional latency. The new Claude 3.5 Haiku model, set for release later this month, matches the performance of the previous Claude 3 Opus whilst maintaining cost-effectiveness and speed. It notably achieved 40.6% on SWE-bench Verified, outperforming many competitive models including the original Claude 3.5 Sonnet and GPT-4o. (Credit: Anthropic) Regarding computer control capabilities, Anthropic has taken a measured approach, acknowledging current limitations whilst highlighting potential. On the OSWorld benchmark, which evaluates computer interface navigation, Claude 3.5 Sonnet achieved 14.9% in screenshot-only tests, significantly outperforming the next-best system’s 7.8%. The developments have undergone rigorous safety evaluations, with pre-deployment testing conducted in partnership with both the US and UK AI Safety Institutes. Anthropic maintains that the ASL-2 Standard, as detailed in their Responsible Scaling Policy, remains appropriate for these models. (Image Credit: Anthropic) See also: IBM unveils Granite 3.0 AI models with open-source commitment Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Anthropic unveils new Claude AI models and ‘computer control’ appeared first on AI News. View the full article
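As a concrete illustration of how developers typically reach the upgraded model, here is a minimal sketch using Anthropic's Python SDK and the standard Messages API. The model identifier is an assumption (Anthropic publishes dated IDs and aliases for each release), and the computer-use beta layers extra tool definitions on top of this same call, which are omitted here.

```python
# Minimal sketch: calling an upgraded Claude 3.5 Sonnet model via Anthropic's Messages API.
# Assumes the `anthropic` Python package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed alias; check Anthropic's model list for the exact ID
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a singly linked list."}
    ],
)
print(response.content[0].text)
```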
  3. The EU AI Act is set to fully take effect in August 2026, but some provisions are coming into force even earlier. The legislation establishes a first-of-its-kind regulatory framework for AI systems, employing a risk-based approach that categorises AI applications based on their potential impact on safety, human rights, and societal wellbeing. “Some systems are banned entirely, while systems deemed ‘high-risk’ are subject to stricter requirements and assessments before deployment,” explains the DPO Centre, a data protection consultancy. Similar to GDPR, the Act’s extra-territorial reach means it applies to any organisation marketing, deploying, or using AI systems within the EU, regardless of where the system is developed. Businesses will be classified primarily as either ‘Providers’ or ‘Deployers,’ with additional categories for ‘Distributors,’ ‘Importers,’ ‘Product Manufacturers,’ and ‘Authorised Representatives.’ For organisations developing or deploying AI systems, particularly those classified as high-risk, compliance preparation promises to be complex. However, experts suggest viewing this as an opportunity rather than a burden. “By embracing compliance as a catalyst for more transparent AI usage, businesses can turn regulatory demands into a competitive advantage,” notes the DPO Centre. Key preparation strategies include comprehensive staff training, establishing robust corporate governance, and implementing strong cybersecurity measures. The legislation’s requirements often overlap with existing GDPR frameworks, particularly regarding transparency and accountability. Organisations must also adhere to ethical AI principles and maintain clear documentation of their systems’ functionality, limitations, and intended use. The EU is currently developing specific codes of practice and templates to assist with compliance obligations. For businesses uncertain about their obligations, experts recommend seeking professional guidance early. Tools like the EU AI Act Compliance Checker can help organisations verify their systems’ alignment with regulatory requirements. Rather than viewing compliance as merely a regulatory burden, forward-thinking organisations should view the EU’s AI Act as an opportunity to demonstrate commitment to responsible AI development and build greater trust with their customers. See also: AI governance gap: 95% of firms haven’t implemented frameworks Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post EU AI Act: Early prep could give businesses competitive edge appeared first on AI News. View the full article
  4. IBM has taken the wraps off its most sophisticated family of AI models to date, dubbed Granite 3.0, at the company’s annual TechXchange event. The Granite 3.0 lineup includes a range of models designed for various applications:
General purpose/language: 8B and 2B variants in both Instruct and Base configurations
Safety: Guardian models in 8B and 2B sizes, designed to implement guardrails
Mixture-of-Experts: A series of models optimised for different deployment scenarios
IBM claims that its new 8B and 2B language models can match or surpass the performance of similarly sized offerings from leading providers across numerous academic and industry benchmarks. These models are positioned as versatile workhorses for enterprise AI, excelling in tasks such as Retrieval Augmented Generation (RAG), classification, summarisation, and entity extraction. A key differentiator for the Granite 3.0 family is IBM’s commitment to open-source AI. The models are released under the permissive Apache 2.0 licence, offering a unique combination of performance, flexibility, and autonomy to both enterprise clients and the broader AI community. IBM believes that by combining a compact Granite model with proprietary enterprise data, particularly using their novel InstructLab alignment technique, businesses can achieve task-specific performance rivalling larger models at a fraction of the cost. Early proofs-of-concept suggest potential cost savings of up to 23x less than large frontier models. According to IBM, transparency and safety remain at the forefront of its AI strategy. The company has published a technical report and responsible use guide for Granite 3.0, detailing the datasets used, data processing steps, and benchmark results. Additionally, IBM offers IP indemnity for all Granite models on its watsonx.ai platform, providing enterprises with greater confidence when integrating these models with their own data. The Granite 3.0 8B Instruct model has shown particularly promising results, outperforming similar-sized open-source models from Meta and Mistral on standard academic benchmarks. It also leads across all measured safety dimensions on IBM’s AttaQ safety benchmark. IBM is also introducing the Granite Guardian 3.0 models, designed to implement safety guardrails by checking user prompts and LLM responses for various risks. These models offer a comprehensive set of risk and harm detection capabilities, including unique checks for RAG-specific issues such as groundedness and context relevance. The entire suite of Granite 3.0 models is available for download on HuggingFace, with commercial use options on IBM’s watsonx platform. IBM has also collaborated with ecosystem partners to integrate Granite models into various offerings, providing greater choice for enterprises worldwide. As IBM continues to advance its AI portfolio, the company says it’s focusing on developing more sophisticated AI agent technologies capable of greater autonomy and complex problem-solving. This includes plans to introduce new AI agent features in IBM watsonx Orchestrate and build agent capabilities across its portfolio in 2025. See also: Scoring AI models: Endor Labs unveils evaluation tool Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post IBM unveils Granite 3.0 AI models with open-source commitment appeared first on AI News. View the full article
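Since the Granite 3.0 weights are distributed on Hugging Face under Apache 2.0, a typical first experiment is loading the instruct variant with the transformers library. A minimal sketch follows; the repository id is an assumption based on IBM's published naming, so verify it on the ibm-granite Hugging Face organisation before running.

```python
# Sketch: load a Granite 3.0 instruct model from Hugging Face and run one chat turn.
# Assumes transformers, torch, and accelerate are installed and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-8b-instruct"  # assumed repo id; confirm on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarise the key risks of deploying LLMs in customer support."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```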
  5. Google CEO Sundar Pichai has announced a series of structural changes and leadership appointments aimed at accelerating the company’s AI initiatives. The restructuring sees the Gemini app team, led by Sissie Hsiao, joining Google DeepMind under the leadership of Demis Hassabis. “Bringing the teams closer together will improve feedback loops, enable fast deployment of our new models in the Gemini app, make our post-training work proceed more efficiently and build on our great product momentum,” Pichai explained. Additionally, the Assistant teams focusing on devices and home experiences will be integrated into the Platforms & Devices division. This reorganisation aims to align these teams more closely with the product surfaces they are developing for and consolidate AI smart home initiatives at Google under one umbrella. Prabhakar Raghavan, a 12-year Google veteran, will transition from his current role to become the Chief Technologist at Google. Pichai praised Raghavan’s contributions, highlighting his leadership across various divisions including Research, Workspace, Ads, and Knowledge & Information (K&I). “Prabhakar’s leadership journey at Google has been remarkable,” Pichai noted. “He led the Gmail team in launching Smart Reply and Smart Compose as early examples of using AI to improve products, and took Gmail and Drive past one billion users.” Taking the helm of the K&I division will be Nick Fox, a long-standing Googler and member of Raghavan’s leadership team. Fox’s appointment as SVP of K&I comes on the back of his extensive experience across various facets of the company, including Product and Design in Search and Assistant, as well as Shopping, Travel, and Payments products. “Nick has been instrumental in shaping Google’s AI product roadmap and collaborating closely with Prabhakar and his leadership team on K&I’s strategy,” comments Pichai. “I frequently turn to Nick to tackle our most challenging product questions and he consistently delivers progress with tenacity, speed, and optimism.” The restructuring comes amid a flurry of AI-driven innovations across Google’s product lineup. Recent developments include the viral success of NotebookLM with Audio Overviews, enhancements to information discovery in Search and Lens, the launch of a revamped Google Shopping platform tailored for the AI era, advancements like AlphaProteo that could revolutionise protein design, and updates to the Gemini family of models. Pichai also highlighted a significant milestone in Google’s healthcare AI initiatives, revealing that their AI system for detecting diabetic retinopathy has conducted 600,000 screenings to date. The company plans to expand access to this technology across India and Thailand. “AI moves faster than any technology before it. To keep increasing the pace of progress, we’ve been making shifts to simplify our structures along the way,” Pichai explained. (Photo by Mitchell Luo) See also: Telefónica’s Wayra backs AI answer engine Perplexity Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Google announces restructuring to accelerate AI initiatives appeared first on AI News. View the full article
  6. Blockchain can become a potent force as the foundation of decentralised AI systems, transparent and fair – ensuring everyone can access not only the technology, but the rewards it delivers. Blockchain has enormous potential to democratise access to AI by addressing concerns around centralisation that have emerged with the growing dominance of companies like OpenAI, Google, and Anthropic. Decentralised AI systems built on blockchains can help to democratise access to essential AI resources like computing power, data, and large language models. They are sorely needed too; as AI models become more powerful, their thirst for data and computing power grows, increasing the barrier of entry to the industry. With blockchain, AI resources can be distributed across open, decentralised networks that anyone can access; levelling the playing field for smaller operators while fostering a spirit of openness and collaboration that’s essential to move the industry forward. Blockchain can create a more equitable system that ensures those who create the data used to train LLMs are fairly rewarded for their contributions. Challenges in decentralised data There’s a lot to like about the prospect of a decentralised AI ecosystem, but the reality will only emerge if some of the key challenges around data access, management, and analysis in blockchain are surmounted. For AI, blockchain can become a critical tool for secure, transparent, and verifiable data management, one that can be accessed by anyone. But blockchains have some architectural problems: they’re essentially a slow, single-table database that records information sequentially – not nearly flexible nor fast enough for the enormous volumes of data required by AI systems. Another challenge is that blockchains don’t integrate easily with other data environments nor other blockchains. Because of this, most enterprises that use blockchains are forced to deploy an array of point solutions to extract data from the ledger, transform it into a relational format, bring it into a traditional database, and move it into a data warehouse for analysis. Meanwhile, to bring external data onto any blockchain, it’s necessary to use complex and risky data oracles. All of these tools introduce centralisation and security risks into the equation. Innovative solutions pave the way Fortunately, a number of innovative solutions are being proposed to help smooth the integration of blockchains and AI. A case in point is Space and Time, creator of a decentralised data warehouse that replaces traditional data stacks and serves as a trustless intermediary between blockchains and enterprise data systems, enabling them to communicate seamlessly. Space and Time’s secret sauce is its Proof-of-SQL consensus mechanism, which cryptographically verifies the accuracy of SQL database queries and proves the underlying dataset hasn’t been tampered with. This enables smart contracts to interact with external data, paving the way for more sophisticated blockchain applications that use AI. For example, Space and Time can enable an AI chatbot like ChatGPT to access blockchain data without any modification. Formerly known for its modular AI blockchain, OG has recently rebranded itself as a ‘Decentralised AI Operating System’ called dAIOS. The system uses blockchain to coordinate decentralised resources for AI including storage, data availability, and compute power, so AI applications can operate securely and transparently on-chain while ensuring users retain control of the data that’s fed in. 
OG’s dAIOS has three main components – storage for managing large data volumes, ‘data availability’ for data verification, and ‘serving’ to power data retrieval, training, and inference – which can be used by any developer to access the resources needed to power their AI models. Looking to solve the challenge of blockchain data access, SQD is the creator of an advanced data indexing tool that works by aggregating on-chain data in parquet files and distributing them across nodes in a decentralised data lake. SQD is addressing the architectural inefficiencies of blockchain, namely the way data is stored sequentially in blocks, an architecture that makes it inefficient to query. Whenever an app needs access to blockchain data, it sends a query to whichever node hosts the desired data. Each node is assigned to a specific segment of blockchain data and SQD provides a detailed index of that information so dApps can quickly find what they need. It typically assigns the same blockchain data to multiple nodes to ensure availability, using an algorithm to manage query volumes. What will AI do for blockchain? Modern blockchain data infrastructures pave the way for a number of novel AI/blockchain applications. One of the most promising lies in security. AI can enhance blockchain security by monitoring transactions and network activity to detect anomalies in real-time, and mitigate any suspicious activity. AI can also enhance the capabilities of smart contracts and make them much more intelligent. By using analytics, AI algorithms can predict any problems when the contract conditions are breached. AI-powered natural language processing algorithms can enable smart contracts to understand legal contracts. And generative AI technology can be used to automate the creation of smart contracts, eliminating the need to learn a specialised programming language like Solidity. The domain of tokenised real-world assets also stands to benefit from an infusion of AI, used to analyse the provenance and condition of RWAs like stocks and fine art. By correlating the analysis with market trends, AI can more accurately calculate the fair market value of tokens. AI can also be used to monitor real-time data feeds to continuously update their values. Additionally, it can be used to automate the process of converting RWAs into digital tokens. Finally, AI can be used to predict future price movements of digital assets by monitoring market trends and industry news. Traders will be able to use the analysis to enhance their decision-making, hedge their investment portfolios and attempt to capitalise on market volatility. AI for everyone The AI industry is growing at an unprecedented pace, and the need for decentralisation is becoming more important to ensure the industry remains open and competitive. Blockchain will provide the foundation for cutting-edge, decentralised AI models, leading to the creation of AI tools that cater to the needs of the majority, ones that focus on simplicity, privacy and ease-of-use. “Space and Time is thrilled to lead Web3 into a new era of data-driven smart contracts and the next generation of DeFi,” said Jay White PhD, Co-Founder and Head of Research at SxT, and the inventor of the Proof of SQL protocol. As AI and blockchain’s convergence gathers pace, the two technologies will democratise access to AI resources, reward data contributors fairly, and allow any company to use its proprietary data securely.
It’s no wonder that industry experts like Miguel Palencia, co-founder of Qtum, express nothing but confidence in their potential. “Giving everyone true ownership and provenance of AI assets is of the utmost importance,” Palencia told Forbes. “There is a pressing need to address the concentration of AI power in the hands of a few companies.” Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI meets blockchain and decentralised data appeared first on AI News. View the full article
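To make the transaction-monitoring idea raised in the article above concrete, here is an illustrative, off-chain sketch that flags anomalous transactions with a standard unsupervised model. It is a toy prototype, not any particular vendor's product; the feature choices and figures are invented for the example.

```python
# Toy anomaly monitor over blockchain-style transaction records using scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per transaction: [value, gas_used, seconds_since_sender_last_tx]
normal_history = np.array([
    [0.5, 21_000, 3600],
    [1.2, 21_000, 7200],
    [0.8, 50_000, 1800],
    [0.3, 21_000, 5400],
] * 50)  # stand-in for a history of ordinary activity

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

incoming = np.array([[250.0, 900_000, 2]])  # unusually large, gas-heavy, rapid-fire transfer
if detector.predict(incoming)[0] == -1:
    print("Flag transaction for review")
```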
  7. Robust governance is essential to mitigate AI risks and maintain responsible systems, but the majority of firms are yet to implement a framework. Commissioned by Prove AI and conducted by Zogby Analytics, a new report polled over 600 CEOs, CIOs, and CTOs from large companies across the US, UK, and Germany. The findings show that 96% of organisations are already utilising AI to support business operations, with the same percentage planning to increase their AI budgets in the coming year. The primary motivations for AI investment include increasing productivity (82%), improving operational efficiency (73%), enhancing decision-making (65%), and achieving cost savings (60%). The most common AI use cases reported were customer service and support, predictive analytics, and marketing and ad optimisation. Despite the surge in AI investments, business leaders are acutely aware of the additional risk exposure that AI brings to their organisations. Data integrity and security emerged as the biggest deterrents to implementing new AI solutions. Executives also reported encountering various AI performance issues, including:
Data quality issues (e.g., inconsistencies or inaccuracies): 41%
Bias detection and mitigation challenges in AI algorithms, leading to unfair or discriminatory outcomes: 37%
Difficulty in quantifying and measuring the return on investment (ROI) of AI initiatives: 28%
While 95% of respondents expressed confidence in their organisation’s current AI risk management practices, the report revealed a significant gap in AI governance implementation. Only 5% of executives reported that their organisation has implemented any AI governance framework. However, 82% stated that implementing AI governance solutions is a somewhat or extremely pressing priority, with 85% planning to implement such solutions by summer 2025. The report also found that 82% of participants support an AI governance executive order to provide stronger oversight. Additionally, 65% expressed concern about IP infringement and data security. Mrinal Manohar, CEO of Prove AI, commented: “Executives are making themselves clear: AI’s long-term efficacy, including providing a meaningful return on the massive investments organisations are currently making, is contingent on their ability to develop and refine comprehensive AI governance strategies. “The wave of AI-focused legislation going into effect around the world is only increasing the urgency; for the current wave of innovation to continue responsibly, we need to implement clearer guardrails to manage and monitor the data informing AI systems.” As global regulations like the EU AI Act loom on the horizon, the report underscores the importance of de-risking AI and the work that still needs to be done. Implementing and optimising dedicated AI governance strategies has emerged as a top priority for businesses looking to harness the power of AI while mitigating associated risks. The findings of this report serve as a wake-up call for organisations to prioritise AI governance as they continue to invest in and deploy AI technologies. Responsible implementation and robust governance frameworks will be key to unlocking the full potential of AI while maintaining trust and compliance. (Photo by Rob Thompson) See also: Scoring AI models: Endor Labs unveils evaluation tool Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI governance gap: 95% of firms haven’t implemented frameworks appeared first on AI News. View the full article
  8. Endor Labs has begun scoring AI models based on their security, popularity, quality, and activity. Dubbed ‘Endor Scores for AI Models,’ this unique capability aims to simplify the process of identifying the most secure open-source AI models currently available on Hugging Face – a platform for sharing Large Language Models (LLMs), machine learning models, and other open-source AI models and datasets – by providing straightforward scores. The announcement comes as developers increasingly turn to platforms like Hugging Face for ready-made AI models, mirroring the early days of readily-available open-source software (OSS). This new release improves AI governance by enabling developers to “start clean” with AI models, a goal that has so far proved elusive. Varun Badhwar, Co-Founder and CEO of Endor Labs, said: “It’s always been our mission to secure everything your code depends on, and AI models are the next great frontier in that critical task. “Every organisation is experimenting with AI models, whether to power particular applications or build entire AI-based businesses. Security has to keep pace, and there’s a rare opportunity here to start clean and avoid risks and high maintenance costs down the road.” George Apostolopoulos, Founding Engineer at Endor Labs, added: “Everybody is experimenting with AI models right now. Some teams are building brand new AI-based businesses while others are looking for ways to slap a ‘powered by AI’ sticker on their product. One thing is for sure, your developers are playing with AI models.” However, this convenience does not come without risks. Apostolopoulos warns that the current landscape resembles “the wild west,” with people grabbing models that fit their needs without considering potential vulnerabilities. Endor Labs’ approach treats AI models as dependencies within the software supply chain. “Our mission at Endor Labs is to ‘secure everything your code depends on,’” Apostolopoulos states. This perspective allows organisations to apply similar risk evaluation methodologies to AI models as they do to other open-source components. Endor’s tool for scoring AI models focuses on several key risk areas:
Security vulnerabilities: Pre-trained models can harbour malicious code or vulnerabilities within model weights, potentially leading to security breaches when integrated into an organisation’s environment.
Legal and licensing issues: Compliance with licensing terms is crucial, especially considering the complex lineage of AI models and their training sets.
Operational risks: The dependency on pre-trained models creates a complex graph that can be challenging to manage and secure.
To combat these issues, Endor Labs’ evaluation tool applies 50 out-of-the-box checks to AI models on Hugging Face. The system generates an “Endor Score” based on factors such as the number of maintainers, corporate sponsorship, release frequency, and known vulnerabilities. Positive factors in the system for scoring AI models include the use of safe weight formats, the presence of licensing information, and high download and engagement metrics. Negative factors encompass incomplete documentation, lack of performance data, and the use of unsafe weight formats. A key feature of Endor Scores is its user-friendly approach.
Developers don’t need to know specific model names; they can start their search with general questions like “What models can I use to classify sentiments?” or “What are the most popular models from Meta?” The tool then provides clear scores ranking both positive and negative aspects of each model, allowing developers to select the most appropriate options for their needs. “Your teams are being asked about AI every single day, and they’ll look for the models they can use to accelerate innovation,” Apostolopoulos notes. “Evaluating Open Source AI models with Endor Labs helps you make sure the models you’re using do what you expect them to do, and are safe to use.” (Photo by Element5 Digital) See also: China Telecom trains AI model with 1 trillion parameters on domestic chips Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Scoring AI models: Endor Labs unveils evaluation tool appeared first on AI News. View the full article
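To illustrate the general shape of this kind of dependency scoring, here is a deliberately simplified toy function that combines signals of the sort listed above into a 0-100 score. It is not Endor Labs' actual algorithm; every field name and weight is invented for the example.

```python
# Toy illustration of combining open-source AI model health signals into a single score.
from dataclasses import dataclass

@dataclass
class ModelSignals:
    maintainers: int
    corporate_sponsor: bool
    releases_last_year: int
    known_vulnerabilities: int
    safe_weight_format: bool   # e.g. safetensors rather than pickled weights
    has_licence: bool
    downloads_last_month: int

def toy_score(s: ModelSignals) -> float:
    score = 50.0
    score += min(s.maintainers, 10) * 2                  # active maintainership
    score += 10 if s.corporate_sponsor else 0            # sustained backing
    score += min(s.releases_last_year, 12)               # regular releases
    score += 10 if s.safe_weight_format else -15         # unsafe formats are penalised
    score += 5 if s.has_licence else -10                 # missing licence is a red flag
    score += min(s.downloads_last_month / 100_000, 10)   # community adoption
    score -= s.known_vulnerabilities * 8                 # known issues weigh heavily
    return max(0.0, min(100.0, score))

print(toy_score(ModelSignals(4, True, 6, 0, True, True, 750_000)))  # 96.5
```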
  9. Telefónica’s corporate venture capital arm, Wayra, has announced its investment in AI answer engine Perplexity. Perplexity’s AI-driven platform aims to revolutionise internet information searches by providing real-time, accurate, and contextual answers to queries using natural language processing. Unlike traditional search engines that return a list of links, Perplexity understands the intent behind questions and delivers clear, concise answers from a curated set of relevant sources. The investment sees Wayra join an impressive roster of existing backers, including tech giants Nvidia and Databricks, as well as high-profile individuals such as Amazon founder Jeff Bezos and billionaire investor Stanley Druckenmiller. Venture capital firms Bessemer Venture Partners, NEA, and IVP have also placed their bets on Perplexity’s growth and expansion. In addition to the investment, Telefónica has inked a global commercial agreement with Perplexity. This partnership will offer preferential benefits to Telefónica customers in Brazil, the UK, and Spain, with plans to explore expansion into other markets. Telefónica has launched a Perplexity experience through television. The company has developed an exclusive living app within the Movistar Plus+ catalogue, allowing customers with compatible sets to interact with Perplexity directly through their TV. Users can ask questions on any topic using the Movistar Plus+ Voice Remote by pressing the Aura button, receiving AI-generated text answers in real-time, displayed on-screen and spoken aloud. Chema Alonso, Chief Digital Data Officer at Telefónica, said: “The investment in Perplexity is part of one of the new keys of Wayra’s investment strategy in leading technology markets such as the US, in high-growth companies with global presence and reach, and leaders in technologies such as AI. “With this commercial agreement, we are also reinforcing Telefónica’s commitment to integrating cutting-edge technologies into the company and offering transformative experiences like Perplexity’s answer engine to our customers”. Perplexity’s AI-powered approach sets it apart from traditional search engines by offering a conversational interface and personalised results. The platform also provides a clear list of cited sources, enhancing the accuracy and relevance of its answers. Aravind Srinivas, Co-Founder and CEO of Perplexity, commented: “This investment from Wayra not only validates our AI-powered approach but also opens up exciting opportunities for global expansion.” By bringing Perplexity’s capabilities to television screens, Telefónica is delivering new ways for users to access information and interact with AI in their homes. “We look forward to continuing to partner with Telefónica to push the boundaries of what’s possible in AI-driven information discovery and to create transformative experiences for users worldwide,” concludes Srinivas. The investment from Telefónica comes as research suggests the telecoms industry is leading in generative AI adoption. See also: King’s Business School: How AI is transforming problem-solving Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Telefónica’s Wayra backs AI answer engine Perplexity appeared first on AI News. View the full article
  10. Machine learning (ML) is revolutionising the way businesses operate, driving innovation, and unlocking new possibilities across industries. By leveraging vast amounts of data and powerful algorithms, ML enables companies to automate processes, make accurate predictions, and uncover hidden patterns to optimise performance. From personalised customer experiences to predictive maintenance and advanced fraud detection, the potential of ML is limitless. Machine learning is a subset of artificial intelligence used to develop algorithms and statistical models that enable computers to perform specific tasks without explicit instructions. Businesses have started to incorporate machine learning app development services and functionality into their processes, applications, and practices to ensure optimal efficiency. By leveraging these services, companies can integrate advanced machine learning capabilities into their operations, enabling automation, data-driven decision-making, and performance optimisation. Integration empowers businesses to stay competitive in today’s fast-paced digital landscape by unlocking new insights and streamlining processes for smarter, more efficient operations. Below, we discuss machine learning innovations that are transforming business applications.
Five innovative ways to use machine learning in businesses
Machine-learning statistics have shown that different industries can benefit from using innovative machine-learning methods to get ahead in business processes.
1. Enhancing customer experience through machine learning
Businesses must enhance their customer experiences to build loyalty and drive engagement. Two effective ML strategies can be used to help enhance the overall experience of customers.
Chatbots and virtual assistants: These can help transform customer services by providing round-the-clock support for customers who need assistance. They can handle various tasks like answering questions and assisting with inquiries. Some of the benefits of using these machine learning technologies include:
24/7 availability: Chatbots are available at any time, so employees do not need to work extra shifts or at night to be available. Unsupervised machine learning systems use artificial neural networks to continue interacting with customers and retain existing customers.
Speed and efficiency: Chatbots and virtual assistants can process information quicker than humans and eliminate wait times for customers. Providing training data and using data science will allow chatbots to communicate with customers.
Scalability: Chatbots can be changed and will evolve to handle more than one task, like dealing with multiple inquiries at the same time, and provide businesses with the opportunity to use technology without needing to increase their staff.
Personalisation: Based on customer data, chatbots and virtual assistants can personalise their interactions with customers, such as using real names, remembering past interactions and providing responses that are tailored to what the customer is requesting.
Personalised recommendations: Using data analysis and machine learning can tailor personalised suggestions to customers based on past interactions, behaviours, and preferences. Here are some of the benefits of using personalised recommendations to improve the overall shopping experience:
Data utilisation: Businesses can identify patterns and preferences by analysing customer data. For example, an e-commerce platform can use a customer’s browser history to track their interests.
Increased engagement: Creating personalised recommendations will increase user engagement. Customised suggestions lead to customers making more purchases, and this increases an individual customer’s time on-platform, helping you understand what the customer needs and wants.
Customer loyalty: Understanding what customers want and prefer will create customer loyalty, because customers will feel that their needs and wants are being recognised and will continue to purchase from the business.
2. Machine learning optimising operations
Businesses need to optimise business processes to enhance efficiency, improve overall performance and reduce costs. For optimisation processes, there are two key areas of focus: managing the supply chain and predictive maintenance.
Supply chain management: This strategy focuses on improving the effectiveness and efficiency of the supply chain, from the purchase of raw materials to the finished products. These are some key strategies that can be used in supply chain optimisation:
Forecasting demand: Advanced analytics can be used to predict customer demand more accurately. This will help business operations by reducing the costs associated with excess stock and aligning inventory levels with sales.
Inventory management: Implementing inventory management practices will help minimise expenses to the business and ensure that it has enough stock on hand when needed. Management can be achieved by using automated inventory tracking systems.
Supplier collaboration: Collaborating with suppliers can help improve communication and build stronger relationships.
Logistics optimisation: Transportation routes and methods can be analysed to improve delivery times for businesses and increase productivity. Businesses can use software to help optimise routes and consider alternative transportation modes.
Technology integration: Using technology in the business can help with business decisions, like whether to use blockchain for transparency, automation for efficiency and AI for predictive analytics.
Predictive maintenance: This process involves using machine learning and data analytics to predict when a machine or piece of equipment is likely to require maintenance or fail (a minimal sketch of this idea appears at the end of this article). Here are some of the key factors of predictive maintenance:
Data analysis: Machine learning algorithms analyse collected data to help predict outcomes like machine failures. This can help businesses schedule maintenance ahead of time to avoid loss of production.
Data collection: Sensors and IoT devices collect data in real-time from machines, such as operational performance metrics, vibration and temperature.
Implementation: Businesses can schedule maintenance of machinery during non-peak hours or when equipment is least used to reduce delays in production.
Continuous improvement: Using machine learning systems can lead to more accurate predictions and help improve maintenance strategies for business processes.
3. Data-driven decision making
Data-driven decision-making is a strategic approach that helps guide business decisions. Companies can use business intelligence, marketing innovations, analytics and risk management to enhance the operational efficiency of their business applications. Here is how each component will advance the company’s processes.
Business intelligence and analytics: These refer to the practices and technologies used to collect, analyse and present business data.
The key aspects of this approach include:
Data visualisation: Business intelligence can help employees understand complex data points of the business in visual reports and by providing dashboards where this data is easily accessible.
Descriptive analytics: Using historical data to understand the past performance of the business can influence future decisions by creating a machine learning model, and businesses can collect data to have analytics on hand.
Predictive analytics: Using machine learning techniques and statistical models can help predict outcomes for the business.
Prescriptive analytics: Prescriptive analytics will recommend actions based on predictive insights.
Risk management: Using data-driven decision-making can be effective for managing risk in the business. The following methods can be used to identify, assess and mitigate risks in the business:
Scenario analysis: Machine learning models can represent scenarios to prepare for any risks that could affect the business.
Risk assessment models: Businesses can use machine learning capabilities to help develop models to predict and analyse potential risks.
Real-time monitoring: Machine learning applications can help monitor any risks in real-time, so the business can manage them as they arise.
Compliance and regulatory monitoring: Machine learning systems can be used to help businesses stay compliant with regulations by constantly monitoring business activities.
Marketing innovations: Data-driven decision-making also opens up new marketing capabilities. These key innovations include:
Customer segmentation: Businesses can segment their audience based on their preferences, behaviours and demographics.
Personalisation: Data analytics can help businesses deliver personalised customer experiences by tailoring offers or messages to enhance customer engagement.
A/B testing: Businesses can use a machine learning algorithm to conduct A/B testing of marketing campaigns, product offerings and website designs.
Predictive customer analytics: This can help businesses predict future purchasing patterns using product recommendations and targeted promotions.
4. Human resources transformation
Businesses can transform human resources as a strategy to enhance HR functions, ensuring that they align with business goals and adapt to the evolving workplace. Talent acquisition and employee engagement are two of the critical components used in this transformation.
Employee engagement has key elements that can foster a committed workforce. These key elements include:
Continuous feedback and communication: Businesses can use machine learning models to gather feedback from employees and put systems in place to support regular check-ins with staff.
Employee well-being: Wellness programmes can be implemented to enhance the well-being of employees.
Career development opportunities: Using machine learning, businesses can provide training programmes for employees to advance their skills.
Talent acquisition can incorporate artificial intelligence tools to scan for the best candidates to fill any vacancies. Using a machine learning system to find the most suitable candidates will eliminate the need for traditional recruitment practices, ensure that the candidate has the correct job experience, and help keep track of applicants by staying in communication and improving the hiring process.
5. Industry-specific applications
Using machine learning in applications will enhance efficiency, compliance and service delivery in industries like financial services and healthcare.
For healthcare applications, machine learning algorithms are used in the following ways:
Electronic health records: Patient care can be streamlined using machine learning models to provide healthcare workers with access to patient information quickly.
Telemedicine: Allowing remote consultations for those patients who are unable to leave their homes or those who live in rural areas where doctors are not easily accessible.
Health information exchange: Allows patient information to be shared among colleagues and different healthcare providers to improve patient treatment.
For finance applications:
Automated trading systems: Machine learning systems can help analyse market data and trends to help businesses and customers make informed decisions when trading.
Blockchain technology: Machine learning algorithms, combined with blockchain’s transparent and tamper-proof ledger, can reduce the cost of transactional data and enhance security.
Robo-advisors: These are automated investment management services which allow users to get advice on how to set investment goals and minimise their risk.
Fraud detection systems: Machine learning systems assist in fraud detection, helping to prevent financial losses and protect customer data.
Future trends in machine learning
The advancement of AI technologies like deep learning, natural language processing, and reinforcement learning will lead to significant advancements in machine learning. These advances will also increase adoption by businesses of all sizes by allowing new tools to be incorporated into existing business practices, such as using cloud-based platforms or open-source frameworks to leverage machine learning systems without requiring extensive technical expertise. For innovation across various industries, machine learning systems can be implemented to optimise processes, develop new services and products, and identify trends.
Conclusion
Machine learning will evolve as technology advances, and the future of machine learning applications will arrive rapidly. Businesses will see increased productivity by using AI to unlock new opportunities to enhance their operations. The post Innovative machine learning uses transforming business applications appeared first on AI News. View the full article
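As promised in the predictive-maintenance section above, here is an illustrative sketch of the basic workflow: train a classifier on historical sensor readings labelled with whether the machine failed soon afterwards, then score new readings. The feature names, data, and labelling rule are invented for the example.

```python
# Toy predictive-maintenance classifier on synthetic sensor data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
temperature = rng.normal(70, 10, n)           # degrees C
vibration = rng.normal(0.3, 0.1, n)           # g RMS
hours_since_service = rng.uniform(0, 2000, n)

# Invented labelling rule: hot, shaky, long-unserviced machines fail more often.
failed = ((temperature > 80) & (vibration > 0.35) & (hours_since_service > 1200)).astype(int)

X = np.column_stack([temperature, vibration, hours_since_service])
X_train, X_test, y_train, y_test = train_test_split(X, failed, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
print("Failure risk for a hot, worn machine:", model.predict_proba([[88.0, 0.45, 1600.0]])[0][1])
```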
  11. Four major US firms have announced plans to invest a combined £6.3 billion in UK data infrastructure. The announcement, made during the International Investment Summit, was welcomed by Technology Secretary Peter Kyle as a “vote of confidence” in Britain’s approach to partnering with businesses to drive growth. CyrusOne, ServiceNow, CloudHQ, and CoreWeave have all committed to substantial investments, bringing the total investment in UK data centres to over £25 billion since the current government took office. These new facilities will provide the UK with increased computing power and data storage capabilities, essential for training and deploying next-generation AI technologies. “Tech leaders from all over the world are seeing Britain as the best place to invest with a thriving and stable market for data centres and AI development,” stated Kyle. The largest single investment comes from Washington DC-based CloudHQ, which plans to develop a £1.9 billion data centre campus in Didcot, Oxfordshire. This hyper-scale facility is expected to create 1,500 jobs during construction and 100 permanent positions once operational. ServiceNow has pledged £1.15 billion over the next five years to expand its UK operations. This investment will support AI development, expand data centres with Nvidia GPUs for local processing of LLM data, and grow the company’s UK workforce beyond its current 1,000 employees. ServiceNow also plans to offer new skills programmes to reach 240,000 UK learners. ServiceNow’s AI platform is already utilised by 85% of Fortune 500 companies and more than half of the FTSE 100. In the UK, the company works with organisations including BT Group, Aston Martin Aramco Formula One Team, and hundreds of public sector bodies such as the NHS and the Department for Work and Pensions. Rachel Reeves, Chancellor of the Exchequer, commented: “This investment is a huge vote of confidence in the UK’s tech and AI sector, and is exactly the kind we want to see as we grow the economy. That’s what the International Investment Summit is all about too. Showing global investors and business that Britain is open for business.” CyrusOne, a leading global data centre developer, announced plans to invest £2.5 billion in the UK over the coming years. Subject to planning permission, their projects are expected to be operational by Q4 2028 and create over 1,000 jobs. AI hyperscaler CoreWeave confirmed an additional £750 million investment to support the next generation of AI cloud infrastructure, building on its £1 billion investment announced in May. These investments follow recent commitments from other tech giants, including Blackstone’s £10 billion investment in the North East of England and Amazon Web Services’ plan to invest £8 billion in UK data centres over the next five years. The UK government has been actively supporting the growth of data infrastructure and the broader tech sector. Last month, data centres were classified as ‘Critical National Infrastructure’ (CNI), providing the industry with greater government support. Additionally, the Tech Secretary appointed entrepreneur Matt Clifford to develop an AI Opportunities Action Plan, aimed at boosting AI adoption across the economy. As part of the ongoing International Investment Summit, Prime Minister Keir Starmer is bringing together 300 industry leaders to catalyse investment in the UK.
The summit will see discussions on how the UK can capitalise on emerging growth sectors including health tech, AI, clean energy, and creative industries. Bill McDermott, Chairman and CEO of ServiceNow, said: “The UK is embracing technology transformation at scale. In this new age of AI, the country continues to be a global leader in driving innovation for the benefit of all its communities. “Our investment accelerates the UK’s push to put AI to work, empowering people, enriching experiences, and strengthening societal bonds. Together, ServiceNow and our customers across the UK are delivering a future where technology benefits everyone.” The series of investments and government initiatives bolstering UK data infrastructure aims to secure the country’s leadership in AI and technology innovation within Europe, and to reinforce its position as an attractive destination for international tech companies seeking to expand their operations. (Photo by Freddie Collins) See also: King’s Business School: How AI is transforming problem-solving Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post UK secures £6.3B in data infrastructure investments appeared first on AI News. View the full article
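For readers tallying the headline number, the £6.3 billion figure is simply the sum of the four commitments reported above (CyrusOne £2.5bn, CloudHQ £1.9bn, ServiceNow £1.15bn, CoreWeave £0.75bn):

\[ 2.5 + 1.9 + 1.15 + 0.75 = 6.3 \quad (\text{£ billions}) \]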
  12. When we talk about real-time data, what we refer to is information that becomes available as soon as it’s created and acquired. Rather than being stored, data is forwarded directly to an application as soon as it’s collected and is made immediately available – without any lag – to support live, in-the-moment decision-making. Real-time data is at work in virtually every aspect of our lives already, powering everything from bank transactions to GPS to emergency maps created when a disaster occurs. The defining characteristic of real-time data is time sensitivity. Real-time data and its associated insights expire incredibly quickly. So, to make the most of it, it must be analysed and capitalised on without delay. One example is nautical navigation software, which must gather hundreds of thousands of data points per second to provide weather, wave, and wind data that is accurate to the minute. To do otherwise is to endanger people whose lives depend on this data, like ship crews. Another example is patient monitoring at a major hospital. Devices transmit patient data – like heartbeat and respiratory rate, blood pressure, or oxygen saturation – to cloud-based software. If any of these vital indicators drop below a certain threshold, then alerts must go out to hospital staff, who can then respond quickly to the issue and decide how to proceed. By providing more actionable insights, real-time data and analytics empower organisations to make better decisions more quickly. Let’s imagine a stock trading algorithm that’s mis-timing the market and selling too late or purchasing too early. Without real-time data, this issue would only be detected and resolved after it occurred. But with real-time data and analytics, the problem can be identified and fixed almost immediately. Real-time data for autonomous decisions While real-time data is already a part of our lives, there is still a lot of room for improvement – and there’s a lot of promise regarding its integration with other hot technologies, like blockchain and AI. By combining the three technologies, it’s possible to create potentially game-changing applications that not only understand what’s happening in the world immediately, but can actually make decisions and take action on those events, in a fully automated and, better yet, decentralised way. It’s the promise of truly autonomous, intelligent applications that require little to no human input. Today’s blockchain networks already host autonomous applications that make use of ‘smart contracts,’ which are self-executing agreements programmed to take actions when specific conditions are met. The most popular applications for this technology can be found in decentralised finance, like a lending and borrowing protocol that enables anyone to take out a cryptocurrency loan by depositing collateral into a smart contract. As soon as the collateral is deposited, the funds will be loaned to the user automatically. Should the borrower default on the repayments, the underlying smart contract will liquidate the loan, distributing the collateral among those who provided funds to the protocol’s liquidity pool. Decentralised applications are intriguing because of the way they make use of real-time data autonomously, eliminating the middleman. Yet their potential has so far been held back by a major limitation. The smart contracts that power them just aren’t that smart, as they can only receive and act on blockchain-based data.
This is where artificial intelligence systems come into play, paving the way for a new kind of innovation known as ‘intelligent contracts’ powered by large language models. This is the concept behind GenLayer, a new blockchain project that’s integrated with generative AI. Its intelligent contracts are similar to traditional smart contracts, but the difference is they really are quite smart. They can process natural language as well as code; they can access the internet and know what’s going on in the real world; and they can use what they learn to make subjective decisions. To explain the difference between smart and intelligent contracts, GenLayer draws a comparison between a simple vending machine and a personal assistant. With a vending machine, you simply insert a coin (the input), select the product you want (action), and wait for the machine to spit out the item (output) according to how the machine has been programmed. The vending machine has only been designed to perform one specific action and it can only follow its pre-programmed instructions. On the other hand, a personal assistant can do more. Being human (and intelligent), they can understand instructions in different forms and execute an almost-unlimited range of commands based on those instructions. So, unlike the vending machine, the personal assistant can adapt and take different actions—without being pre-programmed to do anything. Intelligent contracts make intelligent apps Using intelligent contracts, the opportunities for dApp (decentralised application) developers are almost endless. They’ll be able to build dApps that can search the internet, understand the world around them, and respond to events in local weather reports, sports results or financial markets—and much more besides. Possible examples include an insurance protocol dApp that automatically pays out damages to claimants in real-time, based on the real-world information it receives to verify their claim. Or, a sports betting app could immediately pay out the winnings to a punter who bets on the correct horse. In DeFi, the applications of intelligent contracts extend to on-chain verification, uncollateralised lending, and interest rates that automatically adjust based on market conditions. AI, blockchain, and real-time data have proven to be revolutionary technologies, and it’s only recently that the technology industry has begun to explore what can happen when the three technologies intersect. It’s a nascent sector that’s sure to be the subject of much attention in the months and years to come, but already, GenLayer’s intelligent contracts are paving the way for some truly innovative use-cases. The post Real-time data, blockchain, and AI: A game-changer for intelligent apps appeared first on AI News. View the full article
  13. China Telecom, one of the country’s state-owned telecom giants, has created two LLMs that were trained solely on domestically-produced chips. This breakthrough represents a significant step in China’s ongoing efforts to become self-reliant in AI technology, especially in light of escalating US limitations on access to advanced semiconductors for its competitors. According to the company’s Institute of AI, one of the models, TeleChat2-115B, and another unnamed model were trained on tens of thousands of Chinese-made chips. This achievement is especially noteworthy given the tighter US export rules that have limited China’s ability to purchase high-end processors from Nvidia and other foreign companies. In a statement shared on WeChat, the AI institute claimed that this accomplishment demonstrated China’s capability to independently train LLMs and signals a new era of innovation and self-reliance in AI technology. The scale of these models is remarkable. China Telecom stated that the unnamed LLM has one trillion parameters. In AI terminology, parameters are the variables that help the model learn during training. The more parameters there are, the more complicated and powerful the AI becomes. Chinese companies are striving to keep pace with global AI leaders based outside the country. Washington’s export restrictions on Nvidia’s latest AI chips, such as the A100 and H100, have compelled China to seek alternatives. As a result, Chinese companies have developed their own processors to reduce reliance on Western technologies. The TeleChat2-115B model, for instance, has approximately 100 billion parameters and is said to perform as well as mainstream platforms. China Telecom did not specify which company supplied the domestically-designed chips used to train its models. However, as previously discussed on these pages, Huawei’s Ascend chips play a key part in the country’s AI plans. Huawei, which has faced US penalties in recent years, is also increasing its efforts in the artificial intelligence field. The company has recently started testing its latest AI processor, the Ascend 910C, with potential clients waiting in the domestic market. Large Chinese server companies, as well as internet giants that have previously used Nvidia chips, are apparently testing the new chip’s performance. Huawei’s Ascend processors, as one of the few viable alternatives to Nvidia hardware, are viewed as a key component of China’s strategy that will lessen its reliance on foreign technology. In addition to Huawei, China Telecom is collaborating with other domestic chipmakers such as Cambricon, a Chinese start-up specialising in AI processors. The partnerships reflect a broader tendency in China’s tech industry to build a homegrown ecosystem of AI solutions, further shielding the country from the effects of US export controls. By developing its own AI chips and technology, China is gradually reducing its dependence on foreign-made hardware, especially Nvidia’s highly sought-after and therefore expensive GPUs. While US sanctions make it difficult for Chinese companies to obtain the latest Nvidia hardware, a black market for foreign chips has emerged. Rather than risk operating in the grey market, many Chinese companies prefer to purchase lower-powered alternatives such as previous-gen models to maintain access to Nvidia’s official support and services.
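For readers wondering where headline figures such as “one trillion parameters” or “approximately 100 billion parameters” come from, the sketch below is a rough back-of-the-envelope estimator for a dense decoder-only transformer. The hidden size, depth, and vocabulary are assumptions chosen to land near the 100-billion mark; they are not TeleChat2-115B’s actual configuration.

```python
# Rough, illustrative estimate of where headline parameter counts come from.
# Ignores biases and layer norms; the layer count, hidden size, and vocabulary
# below are invented and are not TeleChat2's real configuration.

def transformer_params(n_layers: int, d_model: int, vocab: int, ffn_mult: int = 4) -> int:
    attention = 4 * d_model * d_model          # Q, K, V and output projections
    ffn = 2 * d_model * (ffn_mult * d_model)   # up- and down-projection
    per_layer = attention + ffn
    embeddings = vocab * d_model
    return n_layers * per_layer + embeddings

# A hypothetical configuration that lands near 100 billion parameters:
print(f"{transformer_params(n_layers=80, d_model=10240, vocab=100_000):,}")
```

Widening or deepening the network pushes the count up quickly, which is why trillion-parameter claims typically imply either a very large dense model or a mixture-of-experts design.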
China’s achievement reflects a broader shift in its approach to AI and semiconductor technology, emphasising self-sufficiency and resilience in an increasingly competitive global economy and in the face of American protectionist trade policies. (Photo by Mark Kuiper) See also: Has Huawei outsmarted Apple in the AI race? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post China Telecom trains AI model with 1 trillion parameters on domestic chips appeared first on AI News. View the full article
  14. While AI improves the detection of cybersecurity threats, it simultaneously ushers in more advanced challenges. Research from Keeper Security finds that, despite the implementation of AI-related policies, many organisations remain inadequately prepared for AI-powered threats. 84% of IT and security leaders find AI-enhanced tools have exacerbated the challenge of detecting phishing and smishing attacks, which were already significant threats. In response, 81% of organisations have enacted AI usage policies for employees. Confidence in these measures runs high, with 77% of leaders expressing familiarity with best practices for AI security. Gap between AI cybersecurity policy and threat preparedness More than half (51%) of security leaders view AI-driven attacks as the most severe threat to their organisations. Alarmingly, 35% of respondents feel ill-prepared to address these attacks compared to other cyber threats. Organisations are deploying several key strategies to meet these emerging challenges: Data encryption: Utilised by 51% of IT leaders, encryption serves as a crucial defence against unauthorised access and is vital against AI-fuelled attacks. Employee training and awareness: With 45% of organisations prioritising enhanced training programmes, there is a focused effort to equip employees to recognise and counter AI-driven phishing and smishing intrusions. Advanced threat detection systems: 41% of organisations are investing in these systems, underscoring the need for improved detection and response to sophisticated AI threats. The advent of AI-driven cyber threats undeniably presents new challenges. Nevertheless, fundamental cybersecurity practices – such as data encryption, employee education, and advanced threat detection – continue to be essential. Organisations must ensure these measures are consistently re-evaluated and adjusted to counter emerging threats. In addition to these core practices, advanced security frameworks like zero trust and Privileged Access Management (PAM) solutions can bolster an organisation’s resilience. Zero trust demands continuous verification of all users, devices, and applications, reducing the risk of unauthorised access and minimising potential damage during an attack. PAM offers targeted security for an organisation’s most sensitive accounts, which is crucial for defending against complex AI-driven threats that target high-level credentials. Darren Guccione, CEO and Co-Founder of Keeper Security, commented: “AI-driven attacks are a formidable challenge, but by reinforcing our cybersecurity fundamentals and adopting advanced security measures, we can build resilient defences against these evolving threats.” Proactivity is also key for organisations—regularly reviewing security policies, performing routine audits, and fostering a culture of cybersecurity awareness are all essential. While organisations are advancing, cybersecurity requires perpetual vigilance. Merging traditional practices with modern approaches like zero trust and PAM will empower organisations to maintain an edge over developing AI-powered threats. (Photo by Growtika) See also: King’s Business School: How AI is transforming problem-solving Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Many organisations unprepared for AI cybersecurity threats appeared first on AI News. View the full article
  15. In the near future, Silicon Valley might look back at recent events as the point where the generative AI craze went too far. This past summer, investors questioned whether top AI stocks could sustain their sky-high valuations, given the lack of returns on massive AI spending. As Autumn approaches, major AI sectors—such as chips, LLMs, and AI devices—have received renewed confidence. Nonetheless, there are an increasing number of reasons to be cautious. Cerebras: A chip contender with a major risk Chip startup Cerebras is challenging Nvidia’s dominance by developing processors designed to power smarter LLMs. Nvidia, a major player in the AI boom, has seen its market cap skyrocket from $364 billion at the start of 2023 to over $3 trillion. Cerebras, however, relies heavily on a single customer: the Abu Dhabi-based AI firm G42. In 2023, G42 accounted for 83% of Cerebras’ revenue, and in the first half of 2024, that figure increased to 87%. While G42 is backed by major players like Microsoft and Silver Lake, such dependency on a single customer poses a risk. Even though Cerebras has signed a deal with Saudi Aramco, its reliance on one client may cause concerns as it seeks a $7-8 billion valuation for its IPO. OpenAI’s record-breaking funding – but with strings attached OpenAI made the news when it raised $6.6 billion at a $157 billion valuation, becoming the largest investment round in Silicon Valley history. However, the company has urged its investors not to back competitors such as Anthropic and Elon Musk’s xAI—an unusual request in the world of venture capital, where investors typically spread their bets across competitors. Critics, including Gary Marcus, have described this approach as “running scared.” OpenAI’s backers also include “bubble chasers” such as SoftBank and Tiger Global, firms known for investing in companies at their peak, which frequently results in huge losses. With top executives such as CTO Mira Murati departing and predicted losses of $5 billion this year despite rising revenues, OpenAI faces significant challenges. Meta’s big bet on AI wearables Meta entered the AI race by unveiling Orion, its augmented reality glasses. The wearables promise to integrate AI into daily life, with Nvidia’s CEO Jensen Huang endorsing the product. However, at a production cost of $10,000 per unit, the price is a major obstacle. Meta will need to reduce costs and overcome consumer hesitation, as previous attempts at AI-powered wearables—such as Snapchat’s glasses, Google Glass, and the Humane AI pin—have struggled to gain traction. The road ahead What’s next for AI? OpenAI must prove it can justify a $157 billion valuation while operating at a loss. Cerebras needs to reassure investors that relying on one client isn’t a dealbreaker. And Meta must convince consumers to adopt a completely new way of interacting with AI. If these companies succeed, this moment could mark a turning point in the AI revolution. However, as tech history shows, high-stakes markets are rarely easy to win. (Photo by Growtika) See also: Ethical, trust and skill barriers hold back generative AI progress in EMEA Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post The risks behind the generative AI craze: Why caution is growing appeared first on AI News. View the full article
  16. As artificial intelligence systems increasingly permeate critical decision-making processes in our everyday lives, the integration of ethical frameworks into AI development is becoming a research priority. At the University of Maryland (UMD), interdisciplinary teams tackle the complex interplay between normative reasoning, machine learning algorithms, and socio-technical systems. In a recent interview with Artificial Intelligence News, postdoctoral researchers Ilaria Canavotto and Vaishnav Kameswaran combine expertise in philosophy, computer science, and human-computer interaction to address pressing challenges in AI ethics. Their work spans the theoretical foundations of embedding ethical principles into AI architectures and the practical implications of AI deployment in high-stakes domains such as employment. Normative understanding of AI systems Ilaria Canavotto, a researcher at UMD’s Values-Centered Artificial Intelligence (VCAI) initiative, is affiliated with the Institute for Advanced Computer Studies and the Philosophy Department. She is tackling a fundamental question: How can we imbue AI systems with normative understanding? As AI increasingly influences decisions that impact human rights and well-being, systems have to comprehend ethical and legal norms. “The question that I investigate is, how do we get this kind of information, this normative understanding of the world, into a machine that could be a robot, a chatbot, anything like that?” Canavotto says. Her research combines two approaches: Top-down approach: This traditional method involves explicitly programming rules and norms into the system. However, Canavotto points out, “It’s just impossible to write them down as easily. There are always new situations that come up.” Bottom-up approach: A newer method that uses machine learning to extract rules from data. While more flexible, it lacks transparency: “The problem with this approach is that we don’t really know what the system learns, and it’s very difficult to explain its decision,” Canavotto notes. Canavotto and her colleagues, Jeff Horty and Eric Pacuit, are developing a hybrid approach that combines the best of both. They aim to create AI systems that can learn rules from data while maintaining explainable decision-making processes grounded in legal and normative reasoning. “[Our] approach […] is based on a field that is called artificial intelligence and law. So, in this field, they developed algorithms to extract information from the data. So we would like to generalise some of these algorithms and then have a system that can more generally extract information grounded in legal reasoning and normative reasoning,” she explains. AI’s impact on hiring practices and disability inclusion While Canavotto focuses on the theoretical foundations, Vaishnav Kameswaran, affiliated with UMD’s NSF Institute for Trustworthy AI and Law and Society, examines AI’s real-world implications, particularly its impact on people with disabilities. Kameswaran’s research looks into the use of AI in hiring processes, uncovering how systems can inadvertently discriminate against candidates with disabilities. He explains, “We’ve been working to… open up the black box a little, try to understand what these algorithms do on the back end, and how they begin to assess candidates.” His findings reveal that many AI-driven hiring platforms rely heavily on normative behavioural cues, such as eye contact and facial expressions, to assess candidates.
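Kameswaran later suggests audit tools that advocacy groups could use to test hiring platforms. As a hedged illustration of one basic check such a tool might run, the sketch below applies the widely used four-fifths (80%) selection-rate rule to a handful of invented hiring decisions; the group labels and outcomes are hypothetical, not data from any real platform.

```python
# Hedged sketch of one check an AI-hiring audit tool might run: the
# "four-fifths rule" comparison of selection rates across groups.
# The decisions below are invented; a real audit would use platform logs.

from collections import defaultdict

decisions = [
    ("disabled", True), ("disabled", False), ("disabled", False), ("disabled", False),
    ("non_disabled", True), ("non_disabled", True), ("non_disabled", False), ("non_disabled", True),
]

def selection_rates(records):
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "FLAG: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```

A check like this only surfaces disparities; as Kameswaran notes below, technical fixes alone will not resolve them.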
Relying on such cues can significantly disadvantage individuals with specific disabilities. For instance, visually impaired candidates may struggle with maintaining eye contact, a signal that AI systems often interpret as a lack of engagement. “By focusing on some of those qualities and assessing candidates based on those qualities, these platforms tend to exacerbate existing social inequalities,” Kameswaran warns. He argues that this trend could further marginalise people with disabilities in the workforce, a group already facing significant employment challenges. The broader ethical landscape Both researchers emphasise that the ethical concerns surrounding AI extend far beyond their specific areas of study. They touch on several key issues: Data privacy and consent: The researchers highlight the inadequacy of current consent mechanisms, especially regarding data collection for AI training. Kameswaran cites examples from his work in India, where vulnerable populations unknowingly surrendered extensive personal data to AI-driven loan platforms during the COVID-19 pandemic. Transparency and explainability: Both researchers stress the importance of understanding how AI systems make decisions, especially when these decisions significantly impact people’s lives. Societal attitudes and biases: Kameswaran points out that technical solutions alone cannot solve discrimination issues. There’s a need for broader societal changes in attitudes towards marginalised groups, including people with disabilities. Interdisciplinary collaboration: The researchers’ work at UMD exemplifies the importance of cooperation between philosophy, computer science, and other disciplines in addressing AI ethics. Looking ahead: solutions and challenges While the challenges are significant, both researchers are working towards solutions: Canavotto’s hybrid approach to normative AI could lead to more ethically-aware and explainable AI systems. Kameswaran suggests developing audit tools for advocacy groups to assess AI hiring platforms for potential discrimination. Both emphasise the need for policy changes, such as updating the Americans with Disabilities Act to address AI-related discrimination. However, they also acknowledge the complexity of the issues. As Kameswaran notes, “Unfortunately, I don’t think that a technical solution to training AI with certain kinds of data and auditing tools is in itself going to solve a problem. So it requires a multi-pronged approach.” A key takeaway from the researchers’ work is the need for greater public awareness about AI’s impact on our lives. People need to know how much data they are sharing and how it is being used. As Canavotto points out, companies often have an incentive to obscure this information; she describes them as “companies that try to tell you my service is going to be better for you if you give me the data.” The researchers argue that much more needs to be done to educate the public and hold companies accountable. Ultimately, Canavotto and Kameswaran’s interdisciplinary approach, combining philosophical inquiry with practical application, points the way forward, helping to ensure that AI systems are not only powerful but also ethical and equitable. See also: Regulations to help or hinder: Cloudflare’s take Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Bridging code and conscience: UMD’s quest for ethical and inclusive AI appeared first on AI News. View the full article
  17. A new study by researchers at King’s Business School and Wazoku has revealed that AI is transforming global problem-solving. The report found that nearly half (46%) of Wazoku’s 700,000-strong network of problem solvers had utilised generative AI (GenAI) to work on innovative ideas over the past year. This network – known as the Wazoku Crowd – comprises a diverse group of professionals including scientists, pharmacists, engineers, PhD students, CEOs, start-ups, and business leaders. Perhaps more strikingly, almost a quarter (22%) of respondents reported using GenAI or LLM tools such as ChatGPT and Claude for at least half of their idea submissions, with 8% employing these technologies for every single submission. Of those using GenAI, 47% are leveraging it specifically for idea generation. The Wazoku Crowd’s collective intelligence is harnessed to solve ‘challenges’ – requests for ideas submitted by enterprises – with an impressive success rate of over 80%. Simon Hill, CEO of Wazoku, commented on the findings: “There’s an incredible amount of hype with GenAI, but alongside that there is enormous curiosity. Getting immersed in something and being curious is an innovator’s dream, so there is rich potential with GenAI.” However, Hill also urged caution: “A note of caution, though – it is best used to generate interest, not solutions. Human ingenuity and creativity are still best, although using GenAI can undoubtedly make that process more effective.” The study revealed that the most common application of GenAI was in research and learning, with 85% of respondents using it for this purpose. Additionally, around one-third of the Wazoku Crowd employed GenAI for report structuring, writing, and data analysis and insight. The research was conducted in partnership with Oguz A. Acar, Professor of Marketing and Innovation at King’s Business School, King’s College London. Professor Acar viewed the study as a crucial first step towards understanding AI’s potential and limitations in tackling complex innovation challenges. “Everyone’s trying to figure out what AI can and can’t do, and this survey is a step forward in understanding that,” Professor Acar stated. “It reveals that some crowd members view GenAI as a valuable ally, using it to research, create, and communicate more effectively.” “While perhaps it’s no surprise that those open to innovation are curious about new tools, the survey also shows mixed opinions. Most people haven’t used GenAI tools yet, highlighting that we’re only beginning to uncover AI’s potential in innovative problem-solving.” Wazoku collaborates with a range of customers, including Sanofi, A2A, Bill & Melinda Gates Foundation, and numerous global enterprise businesses, government departments, and not-for-profits, to crowdsource ideas and innovation. Recently, Wazoku launched its own conversational AI to aid innovation. Dubbed Jen AI, this digital innovation assistant has access to Wazoku’s connected innovation management suite—aimed at accelerating decision-making around innovation and enhancing productivity to deliver consistent, scalable results. “The solutions to the world’s problems are complex, and the support of AI brings vast benefits in terms of efficiency, creativity, and insight generation,” explained Hill. As the adoption of AI in innovation processes continues to grow, it’s clear that – while these tools offer significant potential – they are best used to augment rather than replace human creativity and problem-solving skills. 
(Photo by Ally Griffin) See also: Ivo Everts, Databricks: Enhancing open-source AI and improving data governance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post King’s Business School: How AI is transforming problem-solving appeared first on AI News. View the full article
  18. As AI reshapes the digital landscape, tech companies find themselves in a high-stakes game of regulatory chess, with each move potentially reshaping the possibilities for innovation. The game board is especially intricate for global infrastructure providers like Cloudflare, involving as it does cybersecurity, data privacy, and content moderation in a complex policy framework. “No one wants to miss the boat,” says Alissa Starzak, the company’s deputy chief legal officer and global head of public policy, referring to the rush to regulate AI. Yet she is wary of the tension between urgent action and measured response, a tension that encapsulates the complex balancing act Cloudflare navigates daily. In a recent interview with Artificial Intelligence News, Starzak revealed how the internet infrastructure giant is working to shape a regulatory framework that fosters innovation while safeguarding against emerging cyber threats. The AI regulatory conundrum: Speed vs. caution Regulators worldwide face the question of how to regulate AI as the technology advances. Their urgency is tempered by a significant fact: the full dimensions of AI’s potential are not yet understood. “No one really knows yet,” Starzak said, highlighting the challenge of crafting regulations for a technology with unknown scope. This lack of knowledge means that frameworks for responsible AI development and deployment remain somewhat speculative. An example would be the AI risk framework set by the National Institute of Standards and Technology (NIST), which Starzak said represents a meaningful step towards that goal. Voluntary guidelines provide companies with a roadmap for creating AI risk assessment measures and encourage them to do so without stifling innovation. The tightrope of global regulatory harmonisation Cloudflare is cognisant of the complexities of achieving regulatory harmony across different jurisdictions, particularly in data protection and privacy. Starzak used the EU’s General Data Protection Regulation (GDPR) to illustrate the benefits and challenges of sweeping regulatory frameworks. It is noteworthy that GDPR has a significant role in consolidating privacy norms internationally. Starzak said that its real-life application does not always harmonise with the functioning of the internet. “It doesn’t actually feel like the way the internet necessarily works in practice,” she said, referring to restrictions on data transfers between jurisdictions. This disconnect highlights a broader challenge: crafting regulations that protect consumers and national interests without impeding the global nature of the internet and digital commerce. Starzak emphasised the need for regulatory mechanisms that are “consistent across jurisdiction to jurisdiction, but enable information to travel.” The imperative of targeted, narrow actions Starzak advocates for a more nuanced, targeted approach to cybersecurity measures and content moderation. Her philosophy is rooted in recognising that broad, sweeping actions often have unintended consequences that can harm the ecosystem they aim to protect. In terms of cybersecurity, Starzak stressed the importance of proportionality. She drew a stark contrast between targeted actions, like removing a specific piece of content, and drastic measures, like complete internet shutdowns. “The narrower that you can go, the better off you’re going to be from an open internet standpoint,” she said. The principle extends to content moderation as well.
As Starzak describes, the approach by Cloudflare involves carefully distinguishing between different types of services and their impacts. By doing so, the company aims to make more precise, effective decisions that address specific issues without unnecessarily compromising the broader internet ecosystem. Balancing innovation and regulation in AI The rapid advancement of AI technology presents a unique regulatory challenge. Starzak highlighted the risk of over-regulation stifling innovation and concentrating power in the hands of a few large players. “If you regulate it too much, you restrict the industry in a very significant way and make it really only available to a very small number of players,” she said. Starzak advocates a regulatory approach that encourages responsible innovation while addressing potential harms. This includes promoting the development and adoption of AI risk assessment frameworks and encouraging industry self-regulation through model testing and ‘red teaming.’ The path forward: collaboration and flexibility Starzak emphasises the need for ongoing dialogue and flexibility in regulatory approaches to AI and cybersecurity. She highlighted the importance of industry, government, and civil society collaboration to develop effective, balanced regulations. According to Starzak, the key is to focus on specific harms and consumer protection rather than broad, sweeping regulations. “You have to go in with a purpose,” she stated, urging regulators to understand and articulate the problems they’re trying to solve. A targeted approach, combined with willingness to adapt as technologies evolve offers a path forward through the complex internet and AI regulation world. As Cloudflare continues to navigate this landscape, Starzak’s insights provide a roadmap for balancing innovation, security, and responsible governance. As the tech industry and regulators grapple with the challenge of creating effective governance frameworks, Cloudflare’s approach emphasises targeted actions, global harmonisation efforts, and regulatory flexibility. It represents a thoughtful perspective in the dialogue between tech companies and policymakers. The way forward likely involves collaborative efforts from various stakeholders, including industry leaders, government bodies, and civil society organisations. The focus ******** on striking a balance between protecting users and fostering innovation. This goal requires ongoing adaptation and cooperation across the tech ecosystem. See also: Balancing innovation and trust: Experts assess the EU’s AI Act Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Regulations to help or hinder: Cloudflare’s take appeared first on AI News. View the full article
  19. It’s sometimes difficult to distinguish the reality of technology from the hype and marketing messages that bombard our inboxes daily. In just the last five years, we’ve probably heard too much about the metaverse, blockchain and virtual reality, for example. At present, we’re in the midst of a furore about the much-abused term ‘AI’, and time will tell whether this particular storm will be seen as a teacup resident. Artificial Intelligence News spoke exclusively to Jon McLoone, the Director of Technical Communication and Strategy at one of the most mature organisations in the computational intelligence and scientific innovation space, Wolfram Research, to help us put our present concepts of AI and their practical uses into a deeper context. Jon has worked at Wolfram Research for 32 years in various roles, currently leading the European Technical Services team. A mathematician by training and a skilled practitioner in many aspects of data analysis, Jon began our interview by describing Wolfram’s work in an elevator pitch format. “Our value proposition is that we know computation and Wolfram technology. We tailor our technology to the problem that an organisation has. That’s across a broad range of things. So, we don’t have a typical customer. What they have in common is they’re doing something innovative.” “We’re doing problem-solving, the type of things that use computation and data science. We’re building out a unified platform for computation, and when we talk about computation, we mean the kinds of technical computing, like engineering calculations, data science and machine learning. It’s things like social network analysis, biosciences, actuarial science, and financial computations. Abstractly, these are all fundamentally mathematical things.” “Our world is all those structured areas where we’ve spent 30 years building out different ontologies. We have a symbolic representation of the maths, but also things like graphs and networks, documents, videos, images, audio, time series, entities in the real world, like cities, rivers, and mountains. My team is doing the fun stuff of actually making it do something useful!” “AI we just see as another kind of computation. There were different algorithms that have been developed over years, some of them hundreds of years ago, some of them only tens of years ago. Gen AI just adds to this list.” Claims made about AI in 2024 can sometimes be overoptimistic, so we need to be realistic about its capabilities and consider what it excels at and where it falls short. “There’s still human intelligence, which still remains as the strategic element. You’re not going to say, in the next five years AI will run my company and make decisions. Generative AI is very fluent but is unreliable. Its job is to be plausible, not to be correct. And particularly when you get into the kinds of things Wolfram does, it’s terrible because it will tell you the kinds of things that your mathematical answer would look like.” (Artificial Intelligence News‘ italics.) The work of Wolfram Research in this context focuses on what Jon terms ‘symbolic AI’. To differentiate generative and symbolic AI, he gave us the analogy of modelling the trajectory of a thrown ball. A generative AI would learn how the ball travels by examining many thousands of such throws and then be able to produce a description of the trajectory. “That description would be plausible.
That kind of model is data-rich, understanding poor.” A symbolic representation of the thrown ball, on the other hand, would involve differential equations for projectile motion and representations of elements: mass, viscosity of the atmosphere, friction, and many other factors. “It could then be asked, ‘What happens if I throw the ball on Mars?’ It’ll say something accurate. It’s not going to fail.” The ideal way to solve business (or scientific, medical, or engineering) problems is a combination of human intelligence, symbolic reasoning, as epitomised in Wolfram Language, and what we now term AI acting as the glue between them. AI is a great technology for interpreting meaning and acting as an interface between the component parts. “Some of the interesting crossovers are where we take natural language and turn that into some structured information that you can then compute with. Human language is very messy and ambiguous, and generative AI is very good at mapping that to some structure. Once you’re in a structured world of something that is syntactically formal, then you can do things on it.” A recent example of combining ‘traditional’ AI with the work of Wolfram involved medical records: “We did a project recently taking medical reports, which were handwritten, typed and digital. But they contain words, and trying to do statistics on those isn’t possible. And so, you’ve got to use the generative AI part for mapping all of these words to things like classes: was this an avoidable death? Yes. No. That’s a nice, structured key value pair. And then once we’ve got that information in structured form (for example a piece of JSON or XML, or whatever your chosen structure), we can then do classical statistics to start saying, ‘Is there a trend? Can we project? Was there an impact from COVID on hospital harms?’ Clear-cut questions that you can approach symbolically with things like means and medians and models.” During our interview, Jon also gave a précis of a presentation, which took as its example of his organisation’s work an imaginary peanut butter cup manufacturing plant. What might be the effects of changing out a particular ingredient or altering some detail of the recipe, and what would that change do to the product’s shelf life? “LLMs (large language models) will say, ‘Oh, they’ll probably last a few weeks because peanut butter cups usually sit on the shelf a few weeks. But going to a computational model that can plug into the ingredients, and compute, and you’ll know this thing should last for eight weeks before it goes off. Or what that change might do to the manufacturing process? A computational model can connect to the digital twin of your manufacturing plant and learn, ‘That will slow things down by 3%, so your productivity will fall by 20% because it creates a bottleneck here.’ LLMs are great at connecting you and your question to the model, maths, data science or the database. And that’s really an interesting three-way meeting of minds.” You can catch Wolfram Research at the upcoming TechEx event in Amsterdam, October 1-2, at stand 166 of the AI & Big Data strand. We can’t guarantee any peanut butter-related discussion at the event, but to discover how powerful modelling and generative AI can be harnessed to solve your specific problems and quandaries, contact the company via its website. Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post How cold hard data science harnesses AI with Wolfram Research appeared first on AI News. View the full article
  20. As data management grows more complex and modern applications extend the capabilities of traditional approaches, AI is revolutionising application scaling. In addition to freeing operators from outdated, inefficient methods that require careful supervision and extra resources, AI enables real-time, adaptive optimisation of application scaling. Ultimately, these benefits combine to enhance efficiency and reduce costs for targeted applications. With its predictive capabilities, AI ensures that applications scale efficiently, improving performance and resource allocation—marking a major advance over conventional methods. Ahead of AI & Big Data Expo Europe, Han Heloir, EMEA gen AI senior solutions architect at MongoDB, discusses the future of AI-powered applications and the role of scalable databases in supporting generative AI and enhancing business processes. AI News: As AI-powered applications continue to grow in complexity and scale, what do you see as the most significant trends shaping the future of database technology? Heloir: While enterprises are keen to leverage the transformational power of generative AI technologies, the reality is that building a robust, scalable technology foundation involves more than just choosing the right technologies. It’s about creating systems that can grow and adapt to the evolving demands of generative AI, demands that are changing quickly, some of which traditional IT infrastructure may not be able to support. That is the uncomfortable truth about the current situation. Today’s IT architectures are being overwhelmed by unprecedented data volumes generated from increasingly interconnected data sets. Traditional systems, designed for less intensive data exchanges, are currently unable to handle the massive, continuous data streams required for real-time AI responsiveness. They are also unprepared to manage the variety of data being generated. The generative AI ecosystem often comprises a complex set of technologies. Each layer of technology—from data sourcing to model deployment—increases functional depth and operational costs. Simplifying these technology stacks isn’t just about improving operational efficiency; it’s also a financial necessity. AI News: What are some key considerations for businesses when selecting a scalable database for AI-powered applications, especially those involving generative AI? Heloir: Businesses should prioritise flexibility, performance and future scalability. Here are a few key reasons: The variety and volume of data will continue to grow, requiring the database to handle diverse data types—structured, unstructured, and semi-structured—at scale. Selecting a database that can manage such variety without complex ETL processes is important. AI models often need access to real-time data for training and inference, so the database must offer low latency to enable real-time decision-making and responsiveness. As AI models grow and data volumes expand, databases must scale horizontally to allow organisations to add capacity without significant downtime or performance degradation. Seamless integration with data science and machine learning tools is crucial, and native support for AI workflows—such as managing model data, training sets and inference data—can enhance operational efficiency. AI News: What are the common challenges organisations face when integrating AI into their operations, and how can scalable databases help address these issues?
Heloir: There are a variety of challenges that organisations can run into when adopting AI. These include the massive amounts of data from a wide variety of sources that are required to build AI applications. Scaling these initiatives can also put strain on the existing IT infrastructure, and once the models are built, they require continuous iteration and improvement. To make this easier, a database that scales can help simplify the management, storage and retrieval of diverse datasets. It offers elasticity, allowing businesses to handle fluctuating demands while sustaining performance and efficiency. It also accelerates time-to-market for AI-driven innovations by enabling rapid data ingestion and retrieval, facilitating faster experimentation. AI News: Could you provide examples of how collaborations between database providers and AI-focused companies have driven innovation in AI solutions? Heloir: Many businesses struggle to build generative AI applications because the technology evolves so quickly. Limited expertise and the increased complexity of integrating diverse components further complicate the process, slowing innovation and hindering the development of AI-driven solutions. One way we address these challenges is through our MongoDB AI Applications Program (MAAP), which provides customers with resources to assist them in putting AI applications into production. This includes reference architectures and an end-to-end technology stack that integrates with leading technology providers, professional services and a unified support system. MAAP categorises customers into four groups, ranging from those seeking advice and prototyping to those developing mission-critical AI applications and overcoming technical challenges. MongoDB’s MAAP enables faster, seamless development of generative AI applications, fostering creativity and reducing complexity. AI News: How does MongoDB approach the challenges of supporting AI-powered applications, particularly in industries that are rapidly adopting AI? Heloir: Ensuring you have the underlying infrastructure to build what you need is always one of the biggest challenges organisations face. To build AI-powered applications, the underlying database must be capable of running queries against rich, flexible data structures. With AI, data structures can become very complex. This is one of the biggest challenges organisations face when building AI-powered applications, and it’s precisely what MongoDB is designed to handle. We unify source data, metadata, operational data, vector data and generated data—all in one platform. AI News: What future developments in database technology do you anticipate, and how is MongoDB preparing to support the next generation of AI applications? Heloir: Our key values are the same today as they were when MongoDB initially launched: we want to make developers’ lives easier and help them drive business ROI. This remains unchanged in the age of artificial intelligence. We will continue to listen to our customers, assist them in overcoming their biggest difficulties, and ensure that MongoDB has the features they require to develop the next [generation of] great applications. (Photo by Caspar Camille Rubin) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Han Heloir, MongoDB: The role of scalable databases in AI-powered apps appeared first on AI News. View the full article
  21. Ahead of AI & Big Data Expo Europe, AI News caught up with Ivo Everts, Senior Solutions Architect at Databricks, to discuss several key developments set to shape the future of open-source AI and data governance. One of Databricks’ notable achievements is the DBRX model, which set a new standard for open large language models (LLMs). “DBRX outperforms all other leading open-source AI models on standard benchmarks and has up to 2x faster inference than models like Llama2-70B,” Everts explains. “It was trained more efficiently due to a variety of technological advances. “From a quality standpoint, we believe that DBRX is the best open source model out there and when we refer to ‘best’ this means a wide range of industry benchmarks, including language understanding (MMLU), Programming (HumanEval), and Math (GSM8K).” The open-source AI model aims to “democratise the training of custom LLMs beyond a small handful of model providers and show organisations that they can train world-class LLMs on their data in a cost-effective way.” In line with their commitment to open ecosystems, Databricks has also open-sourced Unity Catalog. “Open-sourcing Unity Catalog enhances its adoption across cloud platforms (e.g., AWS, Azure) and on-premise infrastructures,” Everts notes. “This flexibility allows organisations to uniformly apply data governance policies regardless of where the data is stored or processed.” Unity Catalog addresses the challenges of data sprawl and inconsistent access controls through various features: Centralised data access management: “Unity Catalog centralises the governance of data assets, allowing organisations to manage access controls in a unified manner,” Everts states. Role-Based Access Control (RBAC): According to Everts, Unity Catalog “implements Role-Based Access Control (RBAC), allowing organisations to assign roles and permissions based on user profiles.” Data lineage and auditing: This feature “helps organisations monitor data usage and dependencies, making it easier to identify and eliminate redundant or outdated data,” Everts explains. He adds that it also “logs all data access and changes, providing a detailed audit trail to ensure compliance with data security policies.” Cross-cloud and hybrid support: Everts points out that Unity Catalog “is designed to manage data governance in multi-cloud and hybrid environments” and “ensures that data is governed uniformly, regardless of where it resides.” The company has introduced Databricks AI/BI, a new business intelligence product that leverages generative AI to enhance data exploration and visualisation.
Everts believes that “a truly intelligent BI solution needs to understand the unique semantics and nuances of a business to effectively answer questions for business users.” The AI/BI system includes two key components: Dashboards: Everts describes this as “an AI-powered, low-code interface for creating and distributing fast, interactive dashboards.” These include “standard BI features like visualisations, cross-filtering, and periodic reports without needing additional management services.” Genie: Everts explains this as “a conversational interface for addressing ad-hoc and follow-up questions through natural language.” He adds that it “learns from underlying data to generate adaptive visualisations and suggestions in response to user queries, improving over time through feedback and offering tools for analysts to refine its outputs.” Everts states that Databricks AI/BI is designed to provide “a deep understanding of your data’s semantics, enabling self-service data analysis for everyone in an organisation.” He notes it’s powered by “a compound AI system that continuously learns from usage across an organisation’s entire data stack, including ETL pipelines, lineage, and other queries.” Databricks also unveiled Mosaic AI, which Everts describes as “a comprehensive platform for building, deploying, and managing machine learning and generative AI applications, integrating enterprise data for enhanced performance and governance.” Mosaic AI offers several key components, which Everts outlines: Unified tooling: Provides “tools for building, deploying, evaluating, and governing AI and ML solutions, supporting predictive models and generative AI applications.” Generative AI patterns: “Supports prompt engineering, retrieval augmented generation (RAG), fine-tuning, and pre-training, offering flexibility as business needs evolve.” Centralised model management: “Model Serving allows for centralised deployment, governance, and querying of AI models, including custom ML models and foundation models.” Monitoring and governance: “Lakehouse Monitoring and Unity Catalog ensure comprehensive monitoring, governance, and lineage tracking across the AI lifecycle.” Cost-effective custom LLMs: “Enables training and serving custom large language models at significantly lower costs, tailored to specific organisational domains.” Everts highlights that Mosaic AI’s approach to fine-tuning and customising foundation models includes unique features like “fast startup times” by “utilising in-cluster base model caching,” “live prompt evaluation” where users can “track how the model’s responses change throughout the training process,” and support for “custom pre-trained checkpoints.” At the heart of these innovations lies the Data Intelligence Platform, which Everts says “transforms data management by using AI models to gain deep insights into the semantics of enterprise data.” The platform combines features of data lakes and data warehouses, utilises Delta Lake technology for real-time data processing, and incorporates Delta Sharing for secure data exchange across organisational boundaries.
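As a minimal sketch of the governance pattern described above, the PySpark snippet below writes a Delta table and grants access through Unity Catalog. It assumes a Databricks workspace with Unity Catalog enabled; the catalog, schema, table, and group names are placeholders invented for the example.

```python
# Minimal PySpark sketch of the governance pattern described above: write a
# Delta table, register it in Unity Catalog, and grant access to a group.
# The catalog/schema/table and the group name are invented placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = spark.createDataFrame(
    [("2024-10-01", "login"), ("2024-10-01", "purchase")],
    ["event_date", "event_type"],
)

# Delta Lake supplies the ACID guarantees mentioned above.
events.write.format("delta").mode("append").saveAsTable("main.analytics.events")

# Unity Catalog applies the access policy uniformly, wherever the data lives.
spark.sql("GRANT SELECT ON TABLE main.analytics.events TO `data-analysts`")
```

Outside a Unity Catalog-enabled workspace, the three-level table name and the GRANT statement would need a different setup.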
Everts explains that the Data Intelligence Platform plays a crucial role in supporting new AI and data-sharing initiatives by providing: A unified data and AI platform that “combines the features of data lakes and data warehouses into a single architecture.” Delta Lake for real-time data processing, ensuring “reliable data governance, ACID transactions, and real-time data processing.” Collaboration and data sharing via Delta Sharing, enabling “secure and open data sharing across organisational boundaries.” Integrated support for machine learning and AI model development with popular libraries like MLflow, PyTorch, and TensorFlow. Scalability and performance through its cloud-native architecture and the Photon engine, “an optimised query execution engine.” As a key sponsor of AI & Big Data Expo Europe, Databricks plans to showcase their open-source AI and data governance solutions during the event. “At our stand, we will also showcase how to create and deploy – with Lakehouse apps – a custom GenAI app from scratch using open-source models from Hugging Face and data from Unity Catalog,” says Everts. “With our GenAI app you can generate your own cartoon picture, all running on the Data Intelligence Platform.” Databricks will be sharing more of their expertise at this year’s AI & Big Data Expo Europe. Swing by Databricks’ booth at stand #280 to hear more about open AI and improving data governance. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Ivo Everts, Databricks: Enhancing open-source AI and improving data governance appeared first on AI News. View the full article
  22. BMC Software’s director of solutions marketing, Basil Faruqui, discusses the importance of DataOps, data orchestration, and the role of AI in optimising complex workflow automation for business success. What have been the latest developments at BMC? It’s exciting times at BMC and particularly our Control-M product line, as we are continuing to help some of the largest companies around the world in automating and orchestrating business outcomes that are dependent on complex workflows. A big focus of our strategy has been on DataOps, specifically on orchestration within the DataOps practice. During the last twelve months we have delivered over seventy integrations to serverless and PaaS offerings across AWS, Azure and GCP, enabling our customers to rapidly bring modern cloud services into their Control-M orchestration patterns. Plus, we are prototyping GenAI-based use cases to accelerate workflow development and run-time optimisation. What are the latest trends you’ve noticed developing in DataOps? What we are seeing in the data world in general is continued investment in data and analytics software. Analysts estimate that the spend on data and analytics software last year was in the $100 billion plus range. If we look at the Machine Learning, Artificial Intelligence & Data Landscape that Matt Turck at Firstmark publishes every year, it’s more crowded than ever before. It has 2,011 logos, and over five hundred were added since 2023. Given this rapid growth of tools and investment, DataOps is now taking centre stage as companies are realising that to successfully operationalise data initiatives, they can no longer just add more engineers. DataOps practices are now becoming the blueprint for scaling these initiatives in production. The recent boom of GenAI is going to make this operational model even more important. What should companies be mindful of when trying to create a data strategy? As I mentioned earlier, the investment in data initiatives from business executives, CEOs, CMOs, CFOs etc. continues to be strong. This investment is not just for creating incremental efficiencies but for game-changing, transformational business outcomes as well. This means that three things become very important. First is clear alignment of the data strategy with the business goals, making sure the technology teams are working on what matters the most to the business. Second is data quality and accessibility: the quality of the data is critical. Poor data quality will lead to inaccurate insights. Equally important is ensuring data accessibility – making the right data available to the right people at the right time. Democratising data access, while maintaining appropriate controls, empowers teams across the organisation to make data-driven decisions. Third is achieving scale in production. The strategy must ensure that Ops readiness is baked into the data engineering practices so it’s not something that gets considered only after piloting. How important is data orchestration as part of a company’s overall strategy? Data orchestration is arguably the most important pillar of DataOps. Most organisations have data spread across multiple systems – cloud, on-premises, legacy databases, and third-party applications. The ability to integrate and orchestrate these disparate data sources into a unified system is critical. Proper data orchestration ensures seamless data flow between systems, minimising duplication, latency, and bottlenecks, while supporting timely decision-making.
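As a tool-agnostic sketch of the dependency pattern Faruqui describes, the snippet below only lets a data pipeline run once the upstream application workflows it depends on have completed. It is illustrative plain Python, not Control-M’s interface, and the job names are invented; it simply foreshadows the ERP/CRM-to-ML example discussed later in the interview.

```python
# Tool-agnostic sketch of dependency-aware orchestration: a data pipeline runs
# only after the application workflows it depends on have completed.
# Illustrative plain Python, not Control-M's interface; job names are invented.

completed = set()

def run_job(name: str, action, depends_on=()):
    missing = [dep for dep in depends_on if dep not in completed]
    if missing:
        raise RuntimeError(f"{name} blocked, waiting on: {missing}")
    action()
    completed.add(name)
    print(f"{name}: done")

run_job("erp_daily_close", lambda: None)
run_job("crm_export", lambda: None)
# The data pipeline is only triggered once both application workflows above
# have finished, which is the interoperability point made in the interview.
run_job("churn_prediction_pipeline", lambda: None,
        depends_on=("erp_daily_close", "crm_export"))
```

A production orchestrator adds scheduling, retries, SLAs, and visibility on top of this basic idea.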
What do your customers tell you are their biggest difficulties when it comes to data orchestration?

Organisations continue to face the challenge of delivering data products fast and then scaling quickly in production. GenAI is a good example of this. CEOs and boards around the world are asking for quick results as they sense that this could majorly disrupt those who cannot harness its power. GenAI is mainstreaming practices such as prompt engineering and prompt chaining. The challenge is how we take LLMs, vector databases, bots and so on, and fit them into the larger data pipeline, which traverses a very hybrid architecture from multiple clouds to on-prem, including mainframes for many. This just reiterates the need for a strategic approach to orchestration, one that allows new technologies and practices to be folded in for scalable automation of data pipelines. One customer described Control-M as a power strip of orchestration where they can plug in new technologies and patterns as they emerge, without having to rewire every time they swap older technologies for newer ones.

What are your top tips for ensuring optimum data orchestration?

There are a number of top tips, but I will focus on one: interoperability between application and data workflows, which I believe is critical for achieving scale and speed in production. Orchestrating data pipelines is important, but it is vital to keep in mind that these pipelines are part of a larger ecosystem in the enterprise. Let's say an ML pipeline is deployed to predict which customers are likely to switch to a competitor. The data that comes into such a pipeline is the result of workflows that ran in the ERP/CRM and a combination of other applications. Successful completion of the application workflows is often a prerequisite to triggering the data workflows. Once the model identifies customers that are likely to switch, the next step perhaps is to send them a promotional offer, which means that we will need to go back to the application layer in the ERP and CRM. Control-M is uniquely positioned to solve this challenge as our customers use it to orchestrate and manage intricate dependencies between the application and the data layer.

What do you see as being the main opportunities and challenges when deploying AI?

AI, and specifically GenAI, is rapidly increasing the number of technologies involved in the data ecosystem: lots of new models, vector databases and new automation patterns around prompt chaining. This challenge is not new to the data world, but the pace of change is picking up. From an orchestration perspective we see tremendous opportunities with our customers because we offer a highly adaptable platform for orchestration where they can fold these tools and patterns into their existing workflows versus going back to the drawing board.

Do you have any case studies you could share with us of companies successfully utilising AI?

Domino's Pizza leverages Control-M for orchestrating its vast and complex data pipelines. With over 20,000 stores globally, Domino's manages more than 3,000 data pipelines that funnel data from diverse sources such as internal supply chain systems, sales data, and third-party integrations. This data from applications needs to go through complex transformation patterns and models before it's available for driving decisions related to food quality, customer satisfaction, and operational efficiency across its franchise network.
Control-M plays a crucial role in orchestrating these data workflows, ensuring seamless integration across a wide range of technologies like MicroStrategy, AMQ, Apache Kafka, Confluent, GreenPlum, Couchbase, Talend, SQL Server, and Power BI, to name a few. Beyond just connecting complex orchestration patterns together, Control-M provides them with end-to-end visibility of pipelines, ensuring that they meet strict service-level agreements (SLAs) while handling increasing data volumes. Control-M is helping them generate critical reports faster, deliver insights to franchisees, and scale the roll-out of new business services.

What can we expect from BMC in the year ahead?

Our strategy for Control-M at BMC will stay focused on a couple of basic principles:

- Continue to allow our customers to use Control-M as a single point of control for orchestration as they onboard modern technologies, particularly on the public cloud. This means we will continue to provide new integrations to all major public cloud providers to ensure they can use Control-M to orchestrate workflows across the three major cloud infrastructure models of IaaS, containers and PaaS (serverless cloud services). We plan to continue our strong focus on serverless, and you will see more out-of-the-box integrations from Control-M to support the PaaS model.
- We recognise that enterprise orchestration is a team sport, which involves coordination across engineering, operations and business users. With this in mind, we plan to bring a user experience and interface that is persona-based so that collaboration is frictionless.

Specifically, within DataOps we are looking at the intersection of orchestration and data quality, with a specific focus on making data quality a first-class citizen within application and data workflows. Stay tuned for more on this front!

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Basil Faruqui, BMC Software: How to nail your data and AI strategy appeared first on AI News.

View the full article
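Faruqui's interoperability point, that application workflows must complete before dependent data workflows run and that results then flow back to the application layer, can be illustrated with a small, hypothetical Python sketch of the churn-prediction example above. The job names and the naive sequential runner are invented for illustration; this is not Control-M code or its API.

```python
# Hypothetical illustration of application-to-data workflow dependencies,
# loosely modelled on the churn-prediction example above. Not Control-M code.
from typing import Callable, Dict, List

# Each "job" is just a named Python callable for this sketch.
def run_erp_billing_close() -> None: print("ERP billing close complete")
def run_crm_export() -> None: print("CRM export complete")
def train_and_score_churn_model() -> None: print("Churn scores written")
def send_promotional_offers() -> None: print("Offers queued in CRM")

JOBS: Dict[str, Callable[[], None]] = {
    "erp_billing_close": run_erp_billing_close,
    "crm_export": run_crm_export,
    "churn_scoring": train_and_score_churn_model,
    "send_offers": send_promotional_offers,
}

# Data workflow runs only after the application workflows succeed, and the
# follow-up application step runs only after the data step.
DEPENDS_ON: Dict[str, List[str]] = {
    "erp_billing_close": [],
    "crm_export": [],
    "churn_scoring": ["erp_billing_close", "crm_export"],
    "send_offers": ["churn_scoring"],
}

def run_all() -> None:
    done: set = set()
    while len(done) < len(JOBS):
        for name, deps in DEPENDS_ON.items():
            if name not in done and all(d in done for d in deps):
                JOBS[name]()  # run the job once its upstream jobs have finished
                done.add(name)

if __name__ == "__main__":
    run_all()
```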
23. Healthcare documentation is an integral part of the sector that ensures the delivery of high-quality care and maintains the continuity of patient information. However, as healthcare providers have to deal with excessive amounts of data, managing it can feel overwhelming. With the advent of intelligent document processing technology, a new solution can now be implemented. This article explores how the technology works, its role in healthcare documentation, and its benefits, limitations, and implications for the future.

Intelligent document processing and its importance

Intelligent document processing (IDP) is a more advanced type of automation that uses AI, machine learning, natural language processing, and optical character recognition to collect, process, and organise data from multiple forms of paperwork. Unlike traditional document systems, IDP can handle the unstructured and semi-structured data found in healthcare documents, which exist in many forms. Because it is built on advanced algorithms and artificial intelligence tools, IDP can enhance the work of healthcare providers and assist them in the care delivery process.

IDP's role in healthcare documentation

Multiple forms of documents, like health, employment, or insurance records, reports, notes, forms, and social documents, have to be dealt with by providers daily. IDP can reduce the need for inefficient data management processes through:

- Automating the data extraction process by automatically capturing the essential information from documents, reducing manual effort and enhancing performance.
- Establishing more accurate data with AI algorithms: IDP ensures that the data captured is accurate and consistent, which is crucial for patient safety and care quality.
- Organising data in a searchable format to allow better data access.
- Ensuring compliance with regulations like HIPAA by securely managing sensitive patient data and providing audit trails.

Benefits of IDP in healthcare

The implementation of IDP in healthcare comes with several benefits:

- Increased efficiency: by automating routine tasks, healthcare providers can focus more on patient care rather than paperwork.
- Cost reduction: IDP reduces the need for manual data entry and paper-based processes, leading to significant cost savings.
- Better patient experience: quick access to patient history and records leads to more informed decision-making and personalised care.
- Scalability: as healthcare facilities grow, IDP systems can easily scale to manage increased data volumes without compromising performance.

Challenges in implementing IDP

While IDP offers many advantages, there are challenges to its adoption:

- Integration with existing systems: integrating IDP with current healthcare IT ecosystems can be complex and requires careful planning.
- Data privacy concerns: protecting patient data is paramount, and IDP must adhere to stringent security standards.
- Change management: staff may resist shifting from manual to automated processes, necessitating adequate training and change management strategies.

Future of IDP in healthcare

In the future, IDP is likely to increase its impact in the healthcare field. Given the rise of AI and machine learning, these systems will become increasingly sophisticated, likely providing predictive analytics and decision support services. This could help improve diagnostic precision and create more personalised patient treatment plans, eventually leading to better outcomes.
In addition, IDP may facilitate data exchange between different healthcare systems, supporting the continuity of patient information.

Conclusion

Intelligent document processing is a practical solution that is bound to become increasingly impactful in healthcare. It may help healthcare professionals deal more effectively with the contemporary challenges of patient data. Although challenges exist, the potential rewards of improved patient care, decreased expenses, and more precise data make IDP an invaluable asset. Thus, it can be concluded that intelligent document processing should be considered one of the healthcare industry's solutions of the future in its quest toward digitalisation.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Enhancing healthcare documentation with IDP appeared first on AI News.

View the full article
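To make the extraction step described above concrete, here is a minimal, hypothetical sketch using the open-source pytesseract OCR library and simple regular expressions. The field names, patterns, and file name are invented, and production IDP systems rely on trained extraction models and far more robust validation than this.

```python
# Minimal sketch of an IDP-style extraction step (assumptions: pytesseract and
# Pillow are installed, and a scanned intake form follows a simple
# "Field: value" layout; production IDP relies on ML models, not regexes).
import re
from PIL import Image
import pytesseract

FIELDS = {
    "patient_name": r"Patient Name:\s*(.+)",
    "date_of_birth": r"Date of Birth:\s*([\d/.-]+)",
    "policy_number": r"Policy Number:\s*(\w+)",
}

def extract_record(image_path: str) -> dict:
    """OCR a scanned document and pull out a few structured, searchable fields."""
    text = pytesseract.image_to_string(Image.open(image_path))
    record = {}
    for field, pattern in FIELDS.items():
        match = re.search(pattern, text, flags=re.IGNORECASE)
        record[field] = match.group(1).strip() if match else None  # flag gaps for review
    return record

if __name__ == "__main__":
    print(extract_record("scanned_intake_form.png"))  # hypothetical file name
```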
24. Staying competitive in modern sales effectively means embracing the latest trends in tech. Since late 2022 – when generative AI made its way into the public's consciousness thanks to OpenAI's ChatGPT – AI has been at the forefront of this shift, changing the way sales teams (like most other teams) operate and connect with clients. In this blog post, let's dive into how AI is streamlining sales activities and helping boost conversion rates. Here are the top five ways sales teams can use AI to better personalise interactions, automate admin work, and more, proving that it's not just about cutting costs but transforming how sales are done.

1. Personalised engagement

A typical sales cycle is complex, involving multiple touchpoints and interactions before conversion. Deeper personalisation involves understanding a prospect's business needs, challenges, and industry trends. AI tools are particularly adept at sifting through large datasets to uncover insights that tailor interactions to these specific business contexts. For instance, AI can analyse past interactions, like email exchanges and engagement history, to determine what type of content or product features are most relevant to a specific client. This allows sales teams to offer solutions that are not just generic services or products but are customised to address the client's unique challenges and goals.

AI can also enhance account-based marketing (ABM) strategies by enabling sales teams to create highly personalised content strategies for each account. By analysing data from various touchpoints in the quote-to-cash process, AI helps in crafting messages that resonate deeply with each decision-maker in the client's organisation. This targeted approach not only strengthens relationships but also significantly increases the likelihood of closing deals.

2. Sales forecasting

Accurate sales forecasting is vital in B2B sales, where strategic planning and resource allocation depend heavily on predicted sales outcomes. AI significantly enhances the accuracy and reliability of these forecasts by analysing vast amounts of data and identifying trends that are hard to spot. AI-driven pipeline forecasting tools use historical sales data, market conditions, and real-time sales activity to predict future sales performance. These tools employ predictive analytics to model various scenarios and their potential impacts on sales, helping sales teams to prepare more effectively for future market movements.

Moreover, AI-enhanced forecasting tools can dynamically update predictions based on new data. This means that sales forecasts are not static but evolve as more interaction and transaction data becomes available. Such dynamic forecasting ensures that sales strategies remain agile and responsive to changes, increasing the overall efficiency of sales operations. By leveraging AI for advanced sales forecasting, B2B companies can not only forecast with greater accuracy but also gain strategic insights that lead to a more proactive approach to managing sales pipelines and customer relationships.

3. Dynamic pricing

Dynamic pricing is an advanced AI application that can significantly boost B2B sales performance by optimising pricing strategies based on real-time market data and customer behaviour. This technology allows companies to adjust their pricing models swiftly in response to changes in the market or customer demand, ensuring competitiveness and maximising revenue.
AI tools like Competera analyse historical sales data, market dynamics, competitor pricing, and customer patterns to recommend the most effective pricing strategies for various products and services. For instance, such a tool can suggest special discounts for high-value clients or adjust prices during peak demand periods to capitalise on market trends. AI-driven dynamic pricing can also enhance customer satisfaction by offering fair prices that reflect the current value of the products or services, which can differ across customer segments or even individual clients based on their purchase history and loyalty.

By integrating dynamic pricing models powered by AI, sales teams not only streamline their pricing strategies but also ensure that they are adaptable, data-driven, and closely aligned with both market conditions and customer expectations. For B2B companies aiming to refine their pricing and sales strategies, an AI consulting service can provide a crucial edge. By bringing in advanced data analytics and AI/ML expertise, these services enhance data-driven decision-making, improve customer relationships, and accelerate sales cycles, fostering a more competitive and efficient sales process.

4. Lead scoring and prioritisation

When you have a healthy influx of leads, managing them efficiently is crucial. Sales teams can use AI to dramatically enhance this process through sophisticated lead scoring systems, which assess and rank prospects based on their likelihood to convert. This prioritisation ensures that sales teams focus their efforts on the most promising leads, optimising both time and resources. AI tools integrate various data points like past interactions, engagement levels, company size, and industry-specific behaviours to create a comprehensive profile of each lead.

AI algorithms can examine historical data to recognise patterns that indicate a high probability of conversion. This might include the frequency of communications, the types of questions asked by the prospect, or their engagement with specific content. For example, Salesforce Einstein uses machine learning to continuously refine its scoring model based on new data, making the lead qualification process more dynamic and accurate. By automating the identification of high-potential leads, sales teams can allocate more time to crafting personalised outreach strategies that are more likely to resonate with top-tier prospects.

Moreover, AI-powered lead scoring can alert sales teams to changes in a lead's score in real time. This means that if a prospect's engagement level increases due to a recent interaction or a change in their business needs, the sales team can immediately capitalise on this opportunity, increasing the chances of a successful sale. So, by leveraging AI for lead scoring and prioritisation, sales teams can ensure they are not just reaching out to more leads, but are reaching out to the right leads at the right time.

5. Automating administrative tasks

AI's ability to automate administrative tasks is a game-changer in B2B sales, where efficiency and time management are critical. By taking over routine tasks, AI allows sales teams to dedicate more energy and focus to engaging with clients and closing deals. For instance, AI-powered CRM tools can handle data entry, manage email sequences, schedule meetings, and update logs with new client information. This automation streamlines the sales process, reducing the administrative burden and minimising the potential for human error.
AI-driven automation extends to crafting and sending follow-up emails. AI can analyse the interaction history with each client to determine the most effective follow-up strategy, tailoring messages based on the client's previous responses and engagement level. This personalised approach ensures that communications are relevant and timely, thereby increasing the likelihood of maintaining the client's interest and pushing the sales process forward. AI can also offer predictive insights about the best times to contact clients or send out proposals, based on data patterns that include client availability and response rates. This predictive capability ensures that sales efforts are not just systematic but also strategically timed, maximising the impact of each interaction. By leveraging AI to automate these essential but repetitive tasks, B2B sales teams can significantly improve their productivity and effectiveness, allowing them to focus on what they do best – building relationships and closing sales.

Wrapping up

The integration of AI tools in modern sales processes brings efficiency and effectiveness, allowing sales teams to focus on strategic aspects of sales like relationship building and closing high-value deals. Teams that embrace AI can expect not only increased conversion rates but also more responsive sales operations that can adapt quickly to market changes and customer needs. All in all, companies that welcome ongoing adaptation and investment in AI tools will be well positioned to lead in their industries, leveraging AI not just as a tool, but as a core component of their sales strategy.

(Image Source: Freepik)

The post How sales teams can use AI today to optimise conversions appeared first on AI News.

View the full article
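As an illustration of the lead-scoring idea in section 4, here is a small, hypothetical scikit-learn sketch that ranks prospects by predicted conversion probability. The engagement features and training data are invented, and commercial systems such as Salesforce Einstein are far more sophisticated than this.

```python
# Hypothetical lead-scoring sketch: rank prospects by predicted conversion
# probability using a few engagement features (invented data for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per lead: [emails_opened, meetings_held, pages_viewed, company_size_k]
X_train = np.array([
    [12, 3, 40, 5.0],
    [ 2, 0,  3, 0.4],
    [ 8, 2, 25, 2.0],
    [ 1, 0,  5, 1.0],
    [15, 4, 60, 8.0],
    [ 3, 1,  8, 0.7],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = converted historically

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score new leads and work the highest-probability ones first.
new_leads = {"Acme Ltd": [10, 2, 30, 3.0], "Foo GmbH": [1, 0, 2, 0.2]}
scores = {name: model.predict_proba([feats])[0, 1] for name, feats in new_leads.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0%} estimated conversion likelihood")
```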
25. SS&C Blue Prism's VP of sales for the UK, Ireland and Benelux, Mark Lockett, discusses the firm's latest developments, customer challenges and how to get the most out of intelligent automation tools.

Can you tell us a little bit about SS&C Blue Prism and what it does?

SS&C Blue Prism is a specialist in the field of Intelligent Automation, providing products and solutions that change the way in which our customers deliver the work they undertake. We talk about automation augmenting the workforce, and we can do that by using a digital workforce that brings additional capacity to your human workforce. The rationale is that we get a digital worker to do those repetitive, high-volume, low-value-added tasks, and we then allow your employees to focus on the value they can add.

Intelligent Automation is really looking at the whole cycle of how to deliver the required work through the most efficient channel. That could include orchestration using business process management capabilities. It could also look at process identification through Blue Prism Process Intelligence technologies, where we're trying to identify those tasks that lend themselves to being automated by technology. The dual effect of automation and orchestration of the tasks that customers have to do day in, day out is where SS&C Blue Prism brings most value to its customers.

A digital workforce could be aimed at improving an HR onboarding process, improving your finance period-end close process, or transferring information from an outpatient system to an electronic patient record system and vice versa. The use cases are many and varied but the principle remains the same: use the right channel to deliver the work effort. The beauty of a digital workforce comes in the ability to flex work demands as and when necessary.

What have been the latest developments at the company?

We've been putting a lot of our time, effort and resources into our Next Gen platform. That's our cloud-native platform that provides access to intelligent automation capabilities, delivered in a way that suits our customers best. It helps customers enjoy the benefits of the cloud while keeping the work where it needs to be. With this hybrid deployment, Next Gen allows customers to take advantage of using the cloud while having a self-hosted digital workforce that operates behind the customer's firewall, on their own secure infrastructure – meaning no sensitive data leaves their network. For many customers that operate in highly regulated industries, that really does drive the opportunity for us to enhance the way we can deliver that through the Next Gen platform. And Next Gen also brings together, in a single repository, all the capabilities that allow us to improve the business processes that we're undertaking on behalf of our customers.

Also, I think we'd have been living under a rock if we hadn't appreciated the fact that Gen AI is really where the market is pivoting. We're heavily looking into understanding how we can use that technology to really change the way that we work. We've introduced capabilities that allow us to integrate with a variety of large language models so our customers can adopt Gen AI. And the way in which we think about it is that Gen AI is the capability, effectively the brain that allows you to have the emotional, considered response, and the digital workers are the arms and legs that deliver that work.
So the brain, the Gen AI, does the thinking, and then the digital workforce does the actual doing. When Gen AI is wrapped into Intelligent Automation processes, it means it's fully auditable and secure. With many customers hesitant to fully dive into using Gen AI due to security concerns, the combination is compelling. That's something that our customers are really excited about in terms of driving use of Gen AI. And we're seeing that in a number of places now, where we're looking at Gen AI to manage those customer-facing interactions, those employee interactions and those supplier interactions. Customers have the ability to respond to any of those queries through a variety of channels, be that telephone, email or chat, and Gen AI can pick up and author the response, delivered by the automation platform.

I speak to a lot of end users and the main thing they say about AI, because it's so topical right now, is they think they should be utilising it. The problem for many, though, is they don't know how and why. They're worried that they're going to be left behind if they don't get on board with it, but maybe it's not even suitable for them.

I couldn't agree more. I think for a lot of our customers, and in a lot of customer conversations you have, there is this view that we've got to do something. We've got to have a budget. And invariably there are budgets around for Gen AI. A lot of that is in the pilot phase right now. And if you look at some of the evidence in support of it, those pilots haven't necessarily gone that well. Part of the problem is that many are considering Gen AI without thinking of the business problem that they're trying to solve. We know we've got this new shiny bit of kit and that we should be using it. How to use it and what to do with it is almost a secondary consideration.

The conversation that we really try to move to with the customer is: what is the problem that you're trying to solve? What is the customer issue that you're trying to solve? And we're certainly seeing that through three main lenses in terms of the use case for Gen AI: the customer interaction, the employee interaction, or the citizen interaction, if it's a member of the public. We're seeing some really interesting things right now about how we are supporting our Gen AI partners, because most of what we are doing is facilitating the use of a third-party large language model. We are effectively providing the framework by which our partners can interact with the customer and solve the customer problem.

What kind of trends have you seen developing in Intelligent Automation recently?

There are a number of things that our customers talk to us about. One of them we've already spoken about, and that is this notion of Gen AI. We've got to do it. What are we going to do? How are we going to do it? We need to use Gen AI, and we need to automate it. And there are a number of pilot initiatives that we see because of that. There's been so much hype around the business value of Gen AI that I think it's quite scary for some. There was a recent industry report by McKinsey that talked about a $4.4 trillion market opportunity with Gen AI. There are some absolutely unbelievable numbers thrown out about that. I think the reality is slightly more considered. And I think it's not just about how we can change the way we work. It's really about how I can get a better outcome for the stakeholder, whoever that may be, by deploying Gen AI with automation.
So that’s one of the first trends. The second thing that’s really interesting is our customers that have adopted process automation. They’ve used digital workers to either reduce existing costs or improve productivity. So they’ve used it initially as an opportunity for maybe a bit of cost control around improving and automating some processes. But that now is taking them to the next level, which is looking at how to use process intelligence to really identify further process enhancements that they can make. We’re talking about targeting huge organisational objectives through the use of Intelligent Automation, such as growth, customer satisfaction, or employee satisfaction, to name just a few. I think many companies have taken the low hanging fruit by automation, and now they are investing in those technologies around process identification so they can actually be sure that what they’re automating are the right things and delivering value. But are we? Are we leaving things uncovered by not using the process intelligence in support of the business operation? That is becoming more of a story that our customers are really getting into, and we’ve had a number of deployments where customers have done those initial automation activities, and are now looking to take it to the next level. The third thing we see more of is this co-existence with Microsoft Power applications. We’re seeing customers adopting those capabilities alongside technologies such as ours, and actually coexisting together in support. We see that more and more, and I think that’s a trend that many customers recognise in terms of the way that they’re working. It’s not just a one size fits all approach. What is the most appropriate technology? What are your customers biggest challenges? And how can Intelligent Automation help them deal with those? The number one challenge is cost control. How do we manage in a market of rising prices? How do we make sure that we’re getting value for money from the automation? We continue to advocate and demonstrate the value that automation is bringing. Be really structured in terms of how you are assessing the benefit that the automation is bringing, because you are accounting for that spend, you’ve got to prove that it’s worthwhile. For example, what’s the impact on FTE savings? What’s the volume of automations that I’m delivering? What’s the average cost of an employee that’s doing that work? Multiply one by the other and that’s my FTE saving that goes into the business case. So actual cost control, but measured in the term of the business efficiency that I get as a consequence of it. But, where the magic happens is being able to demonstrate what those extra hours have enabled you to do. Have you been able to launch better, quicker products? Have you improved employee satisfaction? Cost factors are always important, but customers must look beyond this to make full use of automation. Many, if not most, of our customers have their own centres of excellence that need to be able to demonstrate a value to the business. So that’s the number one conversation we get with our customers. How do we continue to justify the investment in the technology? What advice would you give to any companies thinking about implementing Intelligent Automation? For any customer considering introducing Intelligent Automation, what is the problem that you’re looking to solve? That’s the crux of the matter. 
And often you find that customers will look to technologies such as ours where they know they have a challenge with their existing technology estate. They've got a high degree of technology debt in their IT estate, and one of the ways that they can overcome some of those limitations is by adopting Intelligent Automation.

So think about the problem that you're trying to solve, and in order to do that, you need to get a really good understanding of what the actual business processes look like. Or, more importantly, what you think those business processes look like, because often what you think they look like and what they actually look like are very different. That's where things like process intelligence come in to support that. So: what is the problem that you're looking to solve?

The next thing that needs to be considered is how you plan to support that moving forward. Because where our customers have continual investment in the technology and the development of the solution capability, they need to then start being advocates for automation technologies within the business. And once you are doing that, then you are the ones that are effectively going to other parts of the business and trying to identify those automation use cases. Our really successful customers are the ones that have got an internal champion who is going out to other parts of the business, because for many areas of the business this is quite a well-kept secret. So it's about helping people understand what this technology can deliver by way of automation, streamlining and improvement of processes, because it's not that widely understood. We often find that when employees realise what benefits it brings to their team, demand for those internal champions becomes huge.

For some people, this notion of Intelligent Automation with digital workers conjures up a sort of Metal Mickey robot-type image, and we're not talking about that at all. You're talking about using computers to emulate human interactions and, using Gen AI, they're then emulating the human interaction that goes with it. So it becomes really quite powerful, but you've got to think about how you're going to sustain that. What does a centre of excellence look like? What have I got by way of developers that can write the automations? What have I then got, by way of business analysts, that can help us support and find the automations that we need?

Think about what the initial use cases could look like. A business case on the whole is very easy to write. Where the challenge comes is: how do I then sustain and grow the automation footprint? And the customers that are doing it really successfully are either partnering with someone who continues to deliver that function for them, or they're bringing together their own centre of excellence in house, and they are then tasked with being the champions for further deployment.

What plans does SS&C Blue Prism have for the year ahead?

It's something we've already touched upon. We are absolutely focused on transitioning our customers to the Next Gen capability and embracing the technology opportunity that comes with it. Customers have really had input into the development roadmap for the technology and how we are moving with it. Our customers are really looking at when the optimum opportunity is for them to deploy Next Gen. That's going to be a focus in the short to medium term.
And the benefit that offers to our customers is really exciting, particularly when you're talking about a global customer that has operations in a variety of geographies. By having that central automation capability you can deploy the actual workers within each of the regions. That gives you a real step change in terms of the efficiency of automation and the ease with which you monitor and manage those automations as well.

And then, as others are also encountering, there's the whole value that Gen AI brings; again, we have a lot of focus on that. We've got a number of customers that are doing some really interesting things. We've just been successful with a customer project – a public sector body that is looking at the way they transform the citizen experience – and Gen AI has a huge part to play in that. We see that as something that will continue to improve over time.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Mark Lockett, SS&C Blue Prism: Enhancing human capabilities with digital workforces appeared first on AI News.

View the full article
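As a simple worked example of the FTE-saving arithmetic Lockett describes earlier in the interview (hours returned by automation multiplied by the cost of the people who would otherwise do the work), here is a short sketch with invented figures; substitute your own volumes and costs.

```python
# Illustrative FTE-saving calculation with invented figures; plug in your own.
automations_per_month = 40_000        # automated task executions per month
minutes_saved_per_automation = 6      # manual effort each execution replaces
hours_per_fte_per_month = 160         # working hours of one full-time employee
fully_loaded_cost_per_fte = 4_500     # assumed monthly cost per employee (GBP)

hours_saved = automations_per_month * minutes_saved_per_automation / 60
fte_equivalent = hours_saved / hours_per_fte_per_month
monthly_saving = fte_equivalent * fully_loaded_cost_per_fte

print(f"Hours saved per month: {hours_saved:,.0f}")
print(f"FTE equivalent:        {fte_equivalent:,.1f}")
print(f"Estimated saving:      £{monthly_saving:,.0f} per month")
```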
