
AWS and NVIDIA expand partnership to advance generative AI

By: Ryan Daws
29 November 2023 at 14:30

Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The collaboration aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.

The collaboration brings together the strengths of both companies, integrating NVIDIA’s latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as Nitro System advanced virtualisation, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability.

Key highlights of the expanded collaboration include:

  1. Introduction of NVIDIA GH200 Grace Hopper Superchips on AWS:
    • AWS becomes the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology.
    • The NVIDIA GH200 NVL32 multi-node platform enables joint customers to scale to thousands of GH200 Superchips, providing supercomputer-class performance.
  2. Hosting NVIDIA DGX Cloud on AWS:
    • Collaboration to host NVIDIA DGX Cloud, an AI-training-as-a-service, on AWS, featuring GH200 NVL32 for accelerated training of generative AI and large language models.
  3. Project Ceiba supercomputer:
    • Collaboration on Project Ceiba, aiming to design the world’s fastest GPU-powered AI supercomputer with 16,384 NVIDIA GH200 Superchips and processing capability of 65 exaflops.
  4. Introduction of new Amazon EC2 instances:
    • AWS introduces three new Amazon EC2 instances, including P5e instances powered by NVIDIA H200 Tensor Core GPUs for large-scale generative AI and HPC workloads.
  5. Software innovations:
    • NVIDIA introduces software on AWS, such as NeMo Retriever microservice for chatbots and summarisation tools, and BioNeMo to speed up drug discovery for pharmaceutical companies.

This collaboration signifies a joint commitment to advancing the field of generative AI, offering customers access to cutting-edge technologies and resources.

Internally, Amazon’s robotics and fulfilment teams already employ NVIDIA’s Omniverse platform to optimise warehouses in virtual environments before real-world deployment.

The integration of NVIDIA and AWS technologies will accelerate the development, training, and inference of large language models and generative AI applications across various industries.

(Photo by ANIRUDH on Unsplash)

See also: Inflection-2 beats Google’s PaLM 2 across common benchmarks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AWS and NVIDIA expand partnership to advance generative AI appeared first on AI News.


QSimulate Closes a $2.5 Million Financing Round to Boost Businesses in Europe and Asia

27 November 2023 at 11:53
QSimulate

Insider Brief

  • Quantum Simulation Technologies, Inc. — QSimulate — announces that it has closed a $2.5M financing round led by quantum technology investment firm 2xN.
  • The other investors in the round are UTokyo IPC and Kyoto iCAP.
  • Funds will support QSimulate’s quantum physics-based drug-discovery platform.
  • Image: QSimulate

PRESS RELEASE — Quantum Simulation Technologies, Inc. — QSimulate — announces that it has closed a $2.5 million financing round led by quantum technology investment firm 2xN. The other investors in this round are UTokyo IPC and Kyoto iCAP. The proceeds will support QSimulate’s rapidly expanding business centered on its quantum physics-based drug-discovery platform, QSP Life. QSP Life currently includes QUELO, QuValent, and QuantumFP, spanning small-molecule lead optimization to covalent inhibitor design, and to ultra-high-throughput molecular fingerprinting.

Powerful quantum predictions today

QSimulate uses proprietary quantum physics-based algorithms to faithfully predict answers to large-scale biological problems. QSimulate’s technology, embodied in products such as QUELO and QuValent, has enabled the first quantitative application of quantum mechanics to drug design, providing predictions with unprecedented fidelity and opening up the computational study of new therapeutic classes. Through quantum-inspired representations, QSimulate’s quantum engine scales to thousands of atoms and simulates the dynamical processes that govern biological and drug interactions.

The foundation for a quantum future

QSimulate offers a foundational technology for future quantum hardware. In a multi-year partnership with Google Quantum AI (see details in a recent Google Research blog article), QSimulate has played a key role in the development of fault-tolerant quantum computing algorithms for chemical, material, and biomolecular problems. These contributions provide a roadmap for algorithm design in the quantum future supported by the existing QSimulate technologies.

Quantum towards digital molecular discovery

QSimulate’s strategic developments position the company for the digital discovery era. Through the incorporation of physics-based AI, QSimulate’s learning models discriminate between AI truth and AI hallucinations in molecular design. In combination with QSimulate’s existing quantum simulation innovations, QSimulate is building the technology platform of the digital molecular discovery era.

Niels Nielsen, co-founder of 2xN commented: “We are thrilled to lead the funding round and forge a partnership with QSimulate. Our strategy at 2xN is to back scientists and entrepreneurs who are world leaders in their field and QSimulate is a great example of that.

QSimulate stands at the forefront of revolutionizing drug design and material science, and we’re convinced they’re only scratching the surface with their already state-of-the-art QM-based simulation methods. With quantum computing on the rise, the interplay between classical and quantum computing will define the future of computation. QSimulate is well-positioned to benefit from this, having on board both quantum and classical simulation experts. The ongoing collaborations with JSR Corporation and Google Quantum AI are a testament to QSimulate’s pioneering position in harnessing quantum mechanics for drug discovery and materials innovation, setting a new industry benchmark.

Our investment in QSimulate is not merely a fiscal alliance; it’s an expedition into a quantum-imbued future teeming with endless scientific and industrial revelations. The quantum horizon is vibrant, and with QSimulate, we’re not just gazing at it; we’re sailing towards it at full steam!”

Is Quantum Artificial Intelligence Close? Understanding The Challenges of Quantum AI

23 November 2023 at 14:54
Employee using AI computing simulation

A recent Forbes article got the quantum community’s dander up — and I’m not even sure what dander is.

The article — Quantum Artificial Intelligence Is Closer Than You Think — claims that quantum AI is imminent and its transformative power will soon be realized.

While it’s important to maintain enthusiasm, and it’s completely understandable to be excited about the possibilities of quantum AI, timelines for scientific progress — short or long — have historically been unreliable, particularly for AI — and forget about predicting progress on quantum AI.

We’ll try to break down the argument for quantum AI’s imminent arrival by looking at some real challenges that could temper the “closer than you think” prediction.

First, the pace of AI advancement, while impressive, is not solely contingent on processing power. AI also requires vast amounts of data for training, and the development of algorithms that can leverage quantum computing is still in its infancy. The notion that AI will be ‘supercharged’ by quantum computing presupposes that quantum computers will soon be capable of running these algorithms efficiently, which is currently not the case.

Further, quantum computers excel at solving particular types of problems, but they are not universally superior — nor are they expected to be — to classical computers for all tasks. Therefore, the transformative impact of quantum computing on AI may be more nuanced and specialized than the broad revolution implied.

Maybe generative AI has triggered some of this excitement. Indeed, generative AI has absolutely demonstrated remarkable capabilities, but its practical applications are still being explored and understood. The history of technology is littered with examples of innovations that promised to revolutionize the world but instead found a more modest place within it. This is not to understate the potential of quantum AI, but to acknowledge that its integration into the fabric of society and business often takes longer and is more complex than initial projections suggest.

As for quantum computing, while strides have been made, it remains a largely experimental technology that is not ready for widespread practical application. Quantum computers are prone to errors and require conditions that are difficult to maintain, such as extremely low temperatures. They are also extraordinarily expensive and complex to operate, which will likely limit their accessibility and integration into mainstream business operations in the near term. In other words, to get to quantum AI, we first need the quantum part to work.

Let’s look beyond the technological hurdles. There are ethical, legal, and socio-economic considerations that also play a significant role in the adoption of new technologies. Quantum AI’s impact is as much about governance, trust, and accessibility as it is about technical capability.

Science is often caught between cynicism and hype, and this is certainly not meant to be a blanket statement against the prospects of quantum AI. The potential for quantum AI is there, and scientists and entrepreneurs are busy bringing it to fruition. It’s also true that machine learning can benefit quantum computing right now. For example, scientists are using machine learning techniques to find new quantum algorithms and optimize quantum operations. Researchers are also using machine learning to improve error correction for quantum computing.
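To make that last point concrete, here is a toy sketch of the kind of optimization loop involved: a classical routine tunes a single-qubit rotation until the circuit produces a target state. It is a purely illustrative example in plain NumPy, not any particular research group's method, and every name and number in it is an assumption.

```python
# Toy illustration: classical gradient descent tuning a quantum operation.
import numpy as np

def ry(theta: float) -> np.ndarray:
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

target = np.array([0.0, 1.0])   # we want the qubit to end in state |1>
state0 = np.array([1.0, 0.0])   # the circuit starts in |0>

def infidelity(theta: float) -> float:
    """1 - |<target|psi(theta)>|^2, the quantity the loop minimises."""
    psi = ry(theta) @ state0
    return 1.0 - abs(np.dot(target, psi)) ** 2

theta, lr, eps = 0.3, 0.5, 1e-4
for _ in range(200):
    # Finite-difference gradient: a bare-bones "learning" step.
    grad = (infidelity(theta + eps) - infidelity(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(f"optimised angle ~ {theta:.3f} rad (pi = {np.pi:.3f}), "
      f"infidelity = {infidelity(theta):.2e}")
```

Real work in this area optimizes far larger circuits against noisy hardware, but the feedback loop is the same shape: simulate or measure, score, adjust.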

Could there be a breakthrough to shorten this timeline? Most people didn’t see the breakthrough potential of large language models, so scientific leaps should not be ruled out.

However, while the potential of quantum computing to accelerate AI is indeed a fascinating prospect, it is essential to recognize the current state of quantum technologies. As of now, they are not poised to catalyze a new computing revolution within the next decade; rather, they represent a long-term aspirational goal. The research community is still grappling with fundamental questions about how to make quantum computers reliable, scalable, and useful for a broad range of applications.

We can hope quantum AI is closer than we think, but we should probably think it’s not as close as we hope.

Quantum Connect Launches Austria’s First Quantum Machine Learning Community

23 November 2023 at 10:38
Q-Connect

Insider Brief

  • Machine learning experts are collaborating to launch Austria’s first national quantum machine learning initiative.
  • The consortium includes Gradient Zero, Anaqor, QMware and PQML.
  • The initiative aims to build an active community dedicated to the research and development of quantum machine learning applications.

PRESS RELEASE — A consortium of machine learning and quantum computing experts – Gradient Zero, Anaqor, QMware and PQML – is launching Quantum Connect (www.quantum-connect.ai), Austria’s first national quantum machine learning initiative.

The initiative aims to build an active community dedicated to the research and development of quantum machine learning applications for future use in various Austrian industries and public administration.

Developing machine learning applications that can benefit from quantum computing requires not only ML expertise, but also knowledge of the specific quantum hardware platforms and the combination of infrastructure, quantum mathematics, and machine learning. This combination of skills is difficult for individual companies to achieve, which underlines the need for interdisciplinary collaboration between all stakeholders to enable broader access to quantum machine learning. Quantum Connect brings together experts and industry partners to create a unified platform for knowledge and technology sharing.

The initiative was launched by Gradient Zero, a leading Austrian machine learning company, and funded by PQML. Through the partnership with QMware, the leading European quantum cloud company, and Anaqor, a pioneer in the European quantum ecosystem with its platform PlanQK, Quantum Connect was born.

PlanQK, a community-driven platform and ecosystem for quantum applications, with its established user base, will form the technological cornerstone of Quantum Connect and drive the exploration and development of quantum machine learning applications.

While Quantum Connect leverages PlanQK as a DevOps platform, QMware provides unmatched back-end efficiency by delivering its innovative hybrid quantum computing approach – a combination of classical high-performance and quantum computing resources – to run quantum applications on both simulated and native quantum hardware.

Quantum Connect offers machine learning developers direct and easy access to a fully functional quantum system. Quantum Connect is dedicated to advancing quantum machine learning and building an active community in the field.

“We are excited to launch Quantum Connect with our partners QMware and Anaqor to bring Quantum Machine Learning to Austria and beyond,” says Jona Boeddinghaus, COO at Gradient Zero. “We can’t wait to connect learners of all ages and experience levels and provide tutorials and infrastructure to those who want to dive into the world of Quantum and AI.”

Inflection-2 beats Google’s PaLM 2 across common benchmarks

By: Ryan Daws
23 November 2023 at 09:54

Inflection, an AI startup aiming to create “personal AI for everyone”, has announced a new large language model dubbed Inflection-2 that beats Google’s PaLM 2.

Inflection-2 was trained on over 5,000 NVIDIA GPUs, reaching approximately 10²⁵ floating point operations (FLOPs) of training compute and putting it in the same league as PaLM 2 Large. However, early benchmarks show Inflection-2 outperforming Google’s model on tests of reasoning ability, factual knowledge, and stylistic prowess.

On a range of common academic AI benchmarks, Inflection-2 achieved higher scores than PaLM 2 on most. This included outscoring the search giant’s flagship on the diverse Massive Multitask Language Understanding (MMLU) tests, as well as the TriviaQA, HellaSwag, and Grade School Math (GSM8k) benchmarks.

The startup’s new model will soon power its personal assistant app Pi to enable more natural conversations and useful features.

Thrilled to announce that Inflection-2 is now the 2nd best LLM in the world! 💚✨🎉

It will be powering https://t.co/1RWFB5RHtF very soon. And available to select API partners in time. Tech report linked…

Come run with us!https://t.co/8DZwP1Qnqo

— Mustafa Suleyman (@mustafasuleyman) November 22, 2023

Inflection said its transition from NVIDIA A100 to H100 GPUs for inference – combined with optimisation work – will increase serving speed and reduce costs despite Inflection-2 being much larger than its predecessor.  

An Inflection spokesperson said this latest model brings them “a big milestone closer” towards fulfilling the mission of providing AI assistants for all. They added the team is “already looking forward” to training even larger models on their 22,000 GPU supercluster.

Safety is said to be a top priority for the researchers, with Inflection being one of the first signatories to the White House’s July 2023 voluntary AI commitments. The company said its safety team continues working to ensure models are rigorously evaluated and rely on best practices for alignment.

With impressive benchmarks and plans to scale further, Inflection’s latest effort poses a serious challenge to tech giants like Google and Microsoft who have so far dominated the field of large language models. The race is on to deliver the next generation of AI.

(Photo by Johann Walter Bantz on Unsplash)

See also: Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Inflection-2 beats Google’s PaLM 2 across common benchmarks appeared first on AI News.

Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4

By: Ryan Daws
22 November 2023 at 11:33

San Francisco-based AI startup Anthropic has unveiled Claude 2.1, an upgrade to its language model that boasts a 200,000-token context window—vastly outpacing the recently released 128,000-token GPT-4 Turbo model from OpenAI.

The release comes on the heels of an expanded partnership with Google that provides Anthropic access to advanced processing hardware, enabling the substantial expansion of Claude’s context-handling capabilities.

Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x decrease in hallucination rates, system prompts, tool use, and updated pricing.

Claude 2.1 is available over API in our Console, and is powering our https://t.co/uLbS2JNczH chat experience. pic.twitter.com/T1XdQreluH

— Anthropic (@AnthropicAI) November 21, 2023

With the ability to process lengthy documents like full codebases or novels, Claude 2.1 is positioned to unlock new potential across applications from contract analysis to literary study. 

The 200K token window represents more than just an incremental improvement—early tests indicate Claude 2.1 can accurately grasp information from prompts over 50 percent longer than GPT-4 before the performance begins to degrade.

Claude 2.1 (200K Tokens) – Pressure Testing Long Context Recall

We all love increasing context lengths – but what's performance like?

Anthropic reached out with early access to Claude 2.1 so I repeated the “needle in a haystack” analysis I did on GPT-4

Here's what I found:… pic.twitter.com/B36KnjtJmE

— Greg Kamradt (@GregKamradt) November 21, 2023
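For readers unfamiliar with it, the “needle in a haystack” test referenced above plants a specific fact at varying depths inside a long filler document and checks whether the model can retrieve it. The sketch below shows the general shape of such a harness; `ask_model`, the filler text, and the scoring rule are illustrative placeholders, not Kamradt's actual code.

```python
# Minimal sketch of a "needle in a haystack" long-context recall test.
NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"
FILLER = "The quick brown fox jumps over the lazy dog. "

def build_haystack(total_chars: int, depth: float) -> str:
    """Bury the needle `depth` (0.0 = start, 1.0 = end) of the way into filler text."""
    filler = (FILLER * (total_chars // len(FILLER) + 1))[:total_chars]
    cut = int(len(filler) * depth)
    return filler[:cut] + "\n" + NEEDLE + "\n" + filler[cut:]

def ask_model(context: str, question: str) -> str:
    """Placeholder: call whatever chat model you are testing and return its answer."""
    raise NotImplementedError

def run_grid(context_sizes, depths):
    """Score recall for every (context length, needle depth) combination."""
    results = {}
    for size in context_sizes:
        for depth in depths:
            answer = ask_model(build_haystack(size, depth), QUESTION)
            # Crude scoring: did the answer mention the planted detail?
            results[(size, depth)] = "Dolores Park" in answer
    return results

# Example: grid = run_grid([10_000, 100_000], [0.0, 0.25, 0.5, 0.75, 1.0])
```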

Anthropic also touted a 50 percent reduction in hallucination rates for Claude 2.1 over version 2.0. Increased accuracy could put the model in closer competition with GPT-4 in responding precisely to complex factual queries.

Additional new features include an API tool for advanced workflow integration and “system prompts” that allow users to define Claude’s tone, goals, and rules at the outset for more personalised, contextually relevant interactions. For instance, a financial analyst could direct Claude to adopt industry terminology when summarising reports.
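As a rough illustration of how a system prompt is supplied in practice, the snippet below uses the Anthropic Python SDK's Messages API to set tone and rules up front. The model name, prompt wording, and report text are assumptions for the example; consult Anthropic's documentation for current usage.

```python
# Sketch: setting a system prompt with the Anthropic Python SDK (pip install anthropic).
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

report_text = "..."   # the quarterly report to summarise (placeholder)

response = client.messages.create(
    model="claude-2.1",   # assumed model identifier
    max_tokens=512,
    # The system prompt fixes tone, goals, and rules at the outset.
    system=("You are an assistant for a financial analyst. Use standard industry "
            "terminology (EBITDA, YoY, basis points) and keep summaries under 150 words."),
    messages=[
        {"role": "user", "content": f"Summarise the following report:\n\n{report_text}"}
    ],
)
print(response.content[0].text)
```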

However, the full 200K token capacity remains exclusive to paying Claude Pro subscribers for now. Free users will continue to be limited to Claude 2.0’s 100K tokens.

As the AI landscape shifts, Claude 2.1’s enhanced precision and adaptability promise to be a game changer—presenting new options for businesses exploring how to strategically leverage AI capabilities.

With its substantial context expansion and rigorous accuracy improvements, Anthropic’s latest offering signals its determination to compete head-to-head with leading models like GPT-4.

(Image Credit: Anthropic)

See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4 appeared first on AI News.

Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

By: Ryan Daws
21 November 2023 at 10:20

In the wake of the generative AI (GenAI) revolution, UK businesses find themselves at a crossroads between unprecedented opportunities and inherent challenges.

Paul O’Sullivan, Senior Vice President of Solution Engineering (UKI) at Salesforce, sheds light on the complexities of this transformative landscape, urging businesses to tread cautiously while embracing the potential of artificial intelligence.

Unprecedented opportunities

Generative AI has stormed the scene with remarkable speed. ChatGPT, for example, amassed 100 million users in a mere two months.

“If you put that into context, it took 10 years to reach 100 million users on Netflix,” says O’Sullivan.

This rapid adoption signals a seismic shift, promising substantial economic growth. O’Sullivan estimates that generative AI has the potential to contribute a staggering £3.5 trillion ($4.4 trillion) to the global economy.

“Again, if you put that into context, that’s about as much tax as the entire US takes in,” adds O’Sullivan.

One of its key advantages lies in driving automation, with the prospect of automating up to 40 percent of the average workday—leading to significant productivity gains for businesses.

The AI trust gap

However, amid the excitement, there looms a significant challenge: the AI trust gap. 

O’Sullivan acknowledges that despite being a top priority for C-suite executives, over half of customers remain sceptical about the safety and security of AI applications.

Addressing this gap will require a multi-faceted approach including grappling with issues related to data quality and ensuring that AI systems are built on reliable, unbiased, and representative datasets. 

“Companies have struggled with data quality and data hygiene. So that’s a key area of focus,” explains O’Sullivan.

Safeguarding data privacy is also paramount, with stringent measures needed to prevent the misuse of sensitive customer information.

“Both customers and businesses are worried about data privacy—we can’t let large language models store and learn from sensitive customer data,” says O’Sullivan. “Over half of customers and their customers don’t believe AI is safe and secure today.”

Ethical considerations

AI also prompts ethical considerations. Concerns about hallucinations – where AI systems generate inaccurate or misleading information – must be addressed meticulously.

Businesses must confront biases and toxicities embedded in AI algorithms, ensuring fairness and inclusivity. Striking a balance between innovation and ethical responsibility is pivotal to gaining customer trust.

“A trustworthy AI should consistently meet expectations, adhere to commitments, and create a sense of dependability within the organisation,” explains O’Sullivan. “It’s crucial to address the limitations and the potential risks. We’ve got to be open here and lead with integrity.”

As businesses embrace AI, upskilling the workforce will also be imperative.

O’Sullivan advocates for a proactive approach, encouraging employees to master the art of prompt writing. Crafting effective prompts is vital, enabling faster and more accurate interactions with AI systems and enhancing productivity across various tasks.

Moreover, understanding AI lingo is essential to foster open conversations and enable informed decision-making within organisations.

A collaborative future

Crucially, O’Sullivan emphasises a collaborative future where AI serves as a co-pilot rather than a replacement for human expertise.

“AI, for now, lacks cognitive capability like empathy, reasoning, emotional intelligence, and ethics—and these are absolutely critical business skills that humans need to bring to the table,” says O’Sullivan.

This collaboration fosters a sense of trust, as humans act as a check and balance to ensure the responsible use of AI technology.

By addressing the AI trust gap, upskilling the workforce, and fostering a harmonious collaboration between humans and AI, businesses can harness the full potential of generative AI while building trust and confidence among customers.

You can watch our full interview with Paul O’Sullivan below:

Paul O’Sullivan and the Salesforce team will be sharing their invaluable insights at this year’s AI & Big Data Expo Global. O’Sullivan will feature on a day one panel titled ‘Converging Technologies – We Work Better Together’.

The post Paul O’Sullivan, Salesforce: Transforming work in the GenAI era appeared first on AI News.

SandboxAQ Announces AI Simulation Collaboration with NVIDIA to Impact the Physical World

21 November 2023 at 10:59
3d illustration. Model of serotonin molecule, Hormone of Happiness

Insider Brief

  • SandboxAQ announced a collaboration with NVIDIA to predict chemical reactions for drug discovery, battery design, green energy, among other use cases.
  • SandboxAQ will leverage NVIDIA quantum platforms to directly simulate the quantum mechanics.
  • Critical Quote: “Simulation is one of the most promising future technological applications, and it’s already leaving its mark today. Thanks to rapid advances in GPU hardware and quantum information science, we’re finally able to harness AI for more specialized applications that will have a profound impact on our world.” — Eric Schmidt, Chairman of SandboxAQ

PRESS RELEASE — SandboxAQ announced a collaboration with NVIDIA to predict chemical reactions for drug discovery, battery design, green energy, and more. SandboxAQ will leverage NVIDIA quantum platforms to directly simulate the quantum mechanics underpinning modern chemistry, biology and material science using tensor networks.

“Simulation is one of the most promising future technological applications, and it’s already leaving its mark today. Thanks to rapid advances in GPU hardware and quantum information science, we’re finally able to harness AI for more specialized applications that will have a profound impact on our world,” said Eric Schmidt, Chairman of SandboxAQ. “SandboxAQ’s AI simulation capabilities, augmented with NVIDIA accelerated computing and quantum platforms, will help enable the creation of new materials and chemical compounds that will transform industries and address some of the world’s biggest challenges.”

“Simulation will drive a new wave of GPU use, powering previously unattainable insights about our physical world that go beyond what extractive or generative AI are capable of unlocking. Combining Simulation with advanced AI yields solutions to problems in some of the biggest addressable markets in the world, far beyond what generative AI is capable of doing alone,” said Jack D. Hidary, CEO of SandboxAQ. “This collaboration will have a significant impact on a broad range of industries such as healthcare, energy, construction, financial services and more.”

“Advances in quantum chemistry and molecular modeling require powerful accelerated computing platforms to predict complex chemical interactions that can present countless benefits to society,” said Tim Costa, director of high performance computing and quantum at NVIDIA. “NVIDIA’s collaboration with SandboxAQ will help equip scientists to make the next generation of breakthroughs in material science.”

As part of the collaboration, SandboxAQ will be providing technical recommendations on relevant NVIDIA offerings including cuTENSOR, cuTensorNet, Quantum Computing and CUDA libraries. Tensor networks are a scalable way of representing high-dimensional data and are drawing growing interest across numerous domains: machine learning and data science, financial modeling, fluid dynamics, quantum chemistry and more. SandboxAQ will use highly GPU-optimized tensor network methods run on up to 32 NVIDIA H100 Tensor Core GPUs to solve challenging problems in science and industries. SandboxAQ also plans to leverage NVIDIA cuTENSOR and NVIDIA cuQuantum software.
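To see why tensor networks scale where dense representations do not, consider a toy matrix product state: N small tensors stand in for a vector of 2^N amplitudes, and any single amplitude can be recovered by contracting along the chain. The NumPy sketch below is purely illustrative; libraries such as cuTensorNet perform the same kind of contraction on GPUs at far larger scale, and the shapes and values here are assumptions.

```python
# Toy matrix product state (MPS): compact storage plus on-demand contraction.
import numpy as np

N, chi = 20, 4   # 20 sites, bond dimension 4

rng = np.random.default_rng(0)
# One rank-3 tensor per site: (left bond, physical index, right bond).
mps = [rng.normal(size=(1 if i == 0 else chi, 2, 1 if i == N - 1 else chi))
       for i in range(N)]

def amplitude(bits):
    """Contract the chain to recover the amplitude of one bit string."""
    vec = mps[0][:, bits[0], :]              # shape (1, chi)
    for tensor, b in zip(mps[1:], bits[1:]):
        vec = vec @ tensor[:, b, :]          # contract the shared bond index
    return vec.item()

print("amplitude of |00...0> =", amplitude([0] * N))
print("MPS parameters:", sum(t.size for t in mps), "vs dense vector:", 2 ** N)
```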

This work on tensor networks will complement and supercharge SandboxAQ’s existing efforts to leverage AI and Simulation towards impact in pharmaceutical development, materials science, and beyond. Specific applications of SandboxAQ AI and Simulation include protein-ligand binding computations for large-scale undruggable targets in neurodegenerative disease, novel solutions for pose and toxicity prediction, and new AI methods to predict lifecycles for next-generation batteries composed of novel materials.

Microsoft recruits former OpenAI CEO Sam Altman and Co-Founder Greg Brockman

By: Ryan Daws
20 November 2023 at 13:44

AI experts don’t stay jobless for long, as evidenced by Microsoft’s quick recruitment of former OpenAI CEO Sam Altman and Co-Founder Greg Brockman.

Altman, who was recently ousted by OpenAI’s board for reasons that have had no shortage of speculation, has found a new home at Microsoft. The announcement came after unsuccessful negotiations with OpenAI’s board to reinstate Altman.

I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.

— Ilya Sutskever (@ilyasut) November 20, 2023

Microsoft CEO Satya Nadella – who has long expressed confidence in Altman’s vision and leadership – revealed that Altman and Brockman will lead Microsoft’s newly established advanced AI research team.

Nadella expressed excitement about the collaboration, stating, “We’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.”

I’m super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation. We’ve learned a lot over the years about how to give founders and innovators space to build independent identities and cultures within Microsoft, including GitHub, Mojang Studios,…

— Satya Nadella (@satyanadella) November 20, 2023

The move follows Altman’s abrupt departure from OpenAI. Former Twitch CEO Emmett Shear has been appointed as interim CEO at OpenAI.

Today I got a call inviting me to consider a once-in-a-lifetime opportunity: to become the interim CEO of @OpenAI. After consulting with my family and reflecting on it for just a few hours, I accepted. I had recently resigned from my role as CEO of Twitch due to the birth of my…

— Emmett Shear (@eshear) November 20, 2023

Altman’s role at Microsoft is anticipated to build on the company’s strategy of allowing founders and innovators space to create independent identities, similar to Microsoft’s approach with GitHub, Mojang Studios, and LinkedIn.

Microsoft’s decision to bring Altman and Brockman on board coincides with the development of its custom AI chip. The Maia AI chip, designed to train large language models, aims to reduce dependence on Nvidia.

While Microsoft reassures its commitment to the OpenAI partnership, valued at approximately $10 billion, it emphasises ongoing innovation and support for customers and partners.

As Altman and Brockman embark on leading Microsoft’s advanced AI research team, the industry will be watching closely to see what the high-profile figures can do with Microsoft’s resources at their disposal. The industry will also be observing whether OpenAI can maintain its success under different leadership.

(Photo by Turag Photography on Unsplash)

See also: Amdocs, NVIDIA and Microsoft Azure build custom LLMs for telcos

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Microsoft recruits former OpenAI CEO Sam Altman and Co-Founder Greg Brockman appeared first on AI News.

OpenAI Strips CEO Sam Altman of Title Amidst Controversy. What next for ChatGPT maker?

By: The Quant
18 November 2023 at 11:22
Sam Altman, co-founder of OpenAI, has been removed from his position as CEO and director by the company's board. OpenAI, now worth about $80 billion, cited a failure to be "consistently candid in his communications." Altman is credited with convincing Microsoft CEO Satya Nadella to commit $10 billion to the company and leading the company's tender offer transactions this year that fueled a nearly three-fold valuation bump from $29 billion to over $80 billion. His departure leaves a significant gap in the company's fundraising efforts. Despite concerns about the potential misuse of AI technology, Altman has previously stated that "heavy regulation" wasn't needed for some time.

Umbar Shakir, Gate One: Unlocking the power of generative AI ethically

By: Ryan Daws
17 November 2023 at 08:54

Ahead of this year’s AI & Big Data Expo Global, Umbar Shakir, Partner and AI Lead at Gate One, shared her insights into the diverse landscape of generative AI (GenAI) and its impact on businesses.

From addressing the spectrum of use cases to navigating digital transformation, Shakir shed light on the challenges, ethical considerations, and the promising future of this groundbreaking technology.

Wide spectrum of use cases

Shakir highlighted the wide array of GenAI applications, ranging from productivity enhancements and research support to high-stakes areas such as strategic data mining and knowledge bots. She emphasised the transformational power of AI in understanding customer data, moving beyond simple sentiment analysis to providing actionable insights, thus elevating customer engagement strategies.

“GenAI now can take your customer insights to another level. It doesn’t just tell you whether something’s a positive or negative sentiment like old AI would do, it now says it’s positive or negative. It’s negative because X, Y, Z, and here’s the root cause for X, Y, Z,” explains Shakir.

Powering digital transformation

Gate One adopts an adaptive strategy approach, abandoning traditional five-year strategies for more agile, adaptable frameworks.

“We have a framework – our 5P model – where it’s: identify your people, identify the problem statement that you’re trying to solve for, appoint some partnerships, think about what’s the right capability mix that you have, think about the pathway through which you’re going to deliver, be use case or risk-led, and then proof of concept,” says Shakir.

By solving specific challenges and aligning strategies with business objectives, Gate One aims to drive meaningful digital transformation for its clients.

Assessing client readiness

Shakir discussed Gate One’s diagnostic tools, which blend technology maturity and operating model innovation questions to assess a client’s readiness to adopt GenAI successfully.

“We have a proprietary tool that we’ve built, a diagnostic tool where we look at blending tech maturity capability type questions with operating model innovation questions,” explains Shakir.

By categorising clients as “vanguard” or “safe” players, Gate One tailors their approach to meet individual readiness levels—ensuring a seamless integration of GenAI into the client’s operations.

Key challenges and ethical considerations

Shakir acknowledged the challenges associated with GenAI, especially concerning the quality of model outputs. She stressed the importance of addressing biases, amplifications, and ethical concerns, calling for a more meaningful and sustainable implementation of AI.

“Poor quality data or poorly trained models can create biases, racism, sexism… those are the things that worry me about the technology,” says Shakir.

Gate One is actively working on refining models and data inputs to mitigate such problems.

The future of GenAI

Looking ahead, Shakir predicted a demand for more ethical AI practices from consumers and increased pressure on developers to create representative and unbiased models.

Shakir also envisioned a shift in work dynamics where AI liberates humans from mundane tasks to allow them to focus on solving significant global challenges, particularly in the realm of sustainability.

Later this month, Gate One will be attending and sponsoring this year’s AI & Big Data Expo Global. During the event, Gate One aims to share its ethos of meaningful AI and emphasise ethical and sustainable approaches.

Gate One will also be sharing with attendees GenAI’s impact on marketing and experience design, offering valuable insights into the changing landscape of customer interactions and brand experiences.

As businesses navigate the evolving landscape of GenAI, Gate One stands at the forefront, advocating for responsible, ethical, and sustainable practices and ensuring a brighter, more impactful future for businesses and society.

Umbar Shakir and the Gate One team will be sharing their invaluable insights at this year’s AI & Big Data Expo Global. Find out more about Umbar Shakir’s day one keynote presentation here.

The post Umbar Shakir, Gate One: Unlocking the power of generative AI ethically appeared first on AI News.

Amdocs, NVIDIA and Microsoft Azure build custom LLMs for telcos

By: Ryan Daws
16 November 2023 at 12:09

Amdocs has partnered with NVIDIA and Microsoft Azure to build custom Large Language Models (LLMs) for the $1.7 trillion global telecoms industry.

Leveraging the power of NVIDIA’s AI foundry service on Microsoft Azure, Amdocs aims to meet the escalating demand for data processing and analysis in the telecoms sector.

The telecoms industry processes hundreds of petabytes of data daily. With the anticipation of global data transactions surpassing 180 zettabytes by 2025, telcos are turning to generative AI to enhance efficiency and productivity.

NVIDIA’s AI foundry service – comprising the NVIDIA AI Foundation Models, NeMo framework, and DGX Cloud AI supercomputing – provides an end-to-end solution for creating and optimising custom generative AI models.

Amdocs will utilise the AI foundry service to develop enterprise-grade LLMs tailored for the telco and media industries, facilitating the deployment of generative AI use cases across various business domains.
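The AI foundry details are NVIDIA's and Amdocs' own, but the general shape of adapting a base model to domain text can be sketched with open-source tooling. The example below uses Hugging Face transformers with a LoRA adapter; the model checkpoint, data file, and hyperparameters are assumptions for illustration, not the actual Amdocs/NVIDIA pipeline.

```python
# Generic sketch: LoRA fine-tuning a causal LM on domain-specific text.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"        # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small trainable low-rank adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Hypothetical corpus of telco support transcripts, one example per line.
data = load_dataset("text", data_files={"train": "telco_transcripts.txt"})["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="telco-lora", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```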

This collaboration builds on the existing Amdocs-Microsoft partnership, ensuring the adoption of applications in secure, trusted environments, both on-premises and in the cloud.

Enterprises are increasingly focusing on developing custom models to perform industry-specific tasks. Amdocs serves over 350 of the world’s leading telecom and media companies across 90 countries. This partnership with NVIDIA opens avenues for exploring generative AI use cases, with initial applications focusing on customer care and network operations.

In customer care, the collaboration aims to accelerate the resolution of inquiries by leveraging information from across company data. In network operations, the companies are exploring solutions to address configuration, coverage, or performance issues in real-time.

This move by Amdocs positions the company at the forefront of ushering in a new era for the telecoms industry by harnessing the capabilities of custom generative AI models.

(Photo by Danist Soh on Unsplash)

See also: Wolfram Research: Injecting reliability into generative AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Amdocs, NVIDIA and Microsoft Azure build custom LLMs for telcos appeared first on AI News.

Wolfram Research: Injecting reliability into generative AI

15 November 2023 at 10:30

The hype surrounding generative AI and the potential of large language models (LLMs), spearheaded by OpenAI’s ChatGPT, appeared at one stage to be practically insurmountable. It was certainly inescapable. More than one in four dollars invested in US startups this year went to an AI-related company, while OpenAI revealed at its recent developer conference that ChatGPT continues to be one of the fastest-growing services of all time.

Yet something continues to be amiss. Or rather, something amiss continues to be added in.

One of the biggest issues with LLMs is their tendency to hallucinate. In other words, they make things up. Figures vary, but one frequently cited rate is 15%-20%. One Google system notched up 27%. This would not be so bad if it did not come across so assertively while doing so. Jon McLoone, Director of Technical Communication and Strategy at Wolfram Research, likens it to the ‘loudmouth know-it-all you meet in the pub.’ “He’ll say anything that will make him seem clever,” McLoone tells AI News. “It doesn’t have to be right.”

The truth is, however, that such hallucinations are an inevitability when dealing with LLMs. As McLoone explains, it is all a question of purpose. “I think one of the things people forget, in this idea of the ‘thinking machine’, is that all of these tools are designed with a purpose in mind, and the machinery executes on that purpose,” says McLoone. “And the purpose was not to know the facts.

“The purpose that drove its creation was to be fluid; to say the kinds of things that you would expect a human to say; to be plausible,” McLoone adds. “Saying the right answer, saying the truth, is a very plausible thing, but it’s not a requirement of plausibility.

“So you get these fun things where you can say ‘explain why zebras like to eat cacti’ – and it’s doing its plausibility job,” says McLoone. “It says the kinds of things that might sound right, but of course it’s all nonsense, because it’s just being asked to sound plausible.”

What is needed, therefore, is a kind of intermediary which is able to inject a little objectivity into proceedings – and this is where Wolfram comes in. In March, the company released a ChatGPT plugin, which aims to ‘make ChatGPT smarter by giving it access to powerful computation, accurate math[s], curated knowledge, real-time data and visualisation’. Alongside being a general extension to ChatGPT, the Wolfram plugin can also synthesise code.

“It teaches the LLM to recognise the kinds of things that Wolfram|Alpha might know – our knowledge engine,” McLoone explains. “Our approach on that is completely different. We don’t scrape the web. We have human curators who give the data meaning and structure, and we lay computation on that to synthesise new knowledge, so you can ask questions of data. We’ve got a few thousand data sets built into that.”
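The plugin's internals are Wolfram's and OpenAI's, but the pattern McLoone describes, routing computational questions to a curated engine and feeding the verified result back into the LLM's prompt, can be sketched generically. Everything below, including the crude keyword router and the placeholder clients, is a hypothetical illustration rather than the plugin's actual code.

```python
# Hypothetical sketch: route computational queries to a knowledge engine,
# then inject the verified result into the LLM prompt.
COMPUTATIONAL_HINTS = ("how many", "what is the population", "integrate",
                       "convert", "distance between")

def looks_computational(question: str) -> bool:
    """Crude keyword router; a real system would do something far smarter."""
    q = question.lower()
    return any(hint in q for hint in COMPUTATIONAL_HINTS)

def query_engine(question: str) -> str:
    """Placeholder for a curated computation/knowledge engine (e.g. Wolfram|Alpha)."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client is in use."""
    raise NotImplementedError

def answer(question: str) -> str:
    if looks_computational(question):
        fact = query_engine(question)
        # "Whisper the facts in its ear": the computed result goes into the prompt.
        prompt = ("Using ONLY the verified result below, answer the question.\n"
                  f"Verified result: {fact}\nQuestion: {question}")
        return ask_llm(prompt)
    return ask_llm(question)
```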

Wolfram has always been on the side of computational technology, with McLoone, who describes himself as a ‘lifelong computation person’, having been with the company for almost 32 years of its 36-year history. When it comes to AI, Wolfram therefore sits on the symbolic side of the fence, which suits logical reasoning use cases, rather than statistical AI, which suits pattern recognition and object classification.

The two systems appear directly opposed, but with more commonality than you may think. “Where I see it, [approaches to AI] all share something in common, which is all about using the machinery of computation to automate knowledge,” says McLoone. “What’s changed over that time is the concept of at what level you’re automating knowledge.

“The good old fashioned AI world of computation is humans coming up with the rules of behaviour, and then the machine is automating the execution of those rules,” adds McLoone. “So in the same way that the stick extends the caveman’s reach, the computer extends the brain’s ability to do these things, but we’re still solving the problem beforehand.

“With generative AI, it’s no longer saying ‘let’s focus on a problem and discover the rules of the problem.’ We’re now starting to say, ‘let’s just discover the rules for the world’, and then you’ve got a model that you can try and apply to different problems rather than specific ones.

“So as the automation has gone higher up the intellectual spectrum, the things have become more general, but in the end, it’s all just executing rules,” says McLoone.

What’s more, as the differing approaches to AI share a common goal, so do the companies on either side. As OpenAI was building out its plugin architecture, Wolfram was asked to be one of the first providers. “As the LLM revolution started, we started doing a bunch of analysis on what they were really capable of,” explains McLoone. “And then, as we came to this understanding of what the strengths or weaknesses were, it was about that point that OpenAI were starting to work on their plugin architecture.

“They approached us early on, because they had a little bit longer to think about this than us, since they’d seen it coming for two years,” McLoone adds. “They understood exactly this issue themselves already.”

McLoone will be demonstrating the plugin with examples at the upcoming AI & Big Data Expo Global event in London on November 30-December 1, where he is speaking. Yet he is keen to stress that there are more varied use cases out there which can benefit from the combination of ChatGPT’s mastery of unstructured language and Wolfram’s mastery of computational mathematics.

One such example is performing data science on unstructured GP medical records. This ranges from correcting peculiar transcriptions on the LLM side – replacing ‘peacemaker’ with ‘pacemaker’ as one example – to using old-fashioned computation and looking for correlations within the data. “We’re focused on chat, because it’s the most amazing thing at the moment that we can talk to a computer. But the LLM is not just about chat,” says McLoone. “They’re really great with unstructured data.”

How does McLoone see LLMs developing in the coming years? There will be various incremental improvements, and training best practices will see better results, not to mention potentially greater speed with hardware acceleration. “Where the big money goes, the architectures follow,” McLoone notes. A sea-change on the scale of the last 12 months, however, can likely be ruled out. Partly because of crippling compute costs, but also because we may have peaked in terms of training sets. If copyright rulings go against LLM providers, then training sets will shrink going forward.

The reliability problem for LLMs, however, will be forefront in McLoone’s presentation. “Things that are computational are where it’s absolutely at its weakest, it can’t really follow rules beyond really basic things,” he explains. “For anything where you’re synthesising new knowledge, or computing with data-oriented things as opposed to story-oriented things, computation really is the way still to do that.”

Yet while responses may vary – one has to account for ChatGPT’s degree of randomness after all – the combination seems to be working, so long as you give the LLM strong instructions. “I don’t know if I’ve ever seen [an LLM] actually override a fact I’ve given it,” says McLoone. “When you’re putting it in charge of the plugin, it often thinks ‘I don’t think I’ll bother calling Wolfram for this, I know the answer’, and it will make something up.

“So if it’s in charge you have to give really strong prompt engineering,” he adds. “Say ‘always use the tool if it’s anything to do with this, don’t try and go it alone’. But when it’s the other way around – when computation generates the knowledge and injects it into the LLM – I’ve never seen it ignore the facts.

“It’s just like the loudmouth guy at the pub – if you whisper the facts in his ear, he’ll happily take credit for them.”

Wolfram will be at AI & Big Data Expo Global. Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Wolfram Research: Injecting reliability into generative AI appeared first on AI News.

DHS AI roadmap prioritises cybersecurity and national safety

By: Ryan Daws
15 November 2023 at 10:10

The Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has launched its inaugural Roadmap for AI.

Viewed as a crucial step in the broader governmental effort to ensure the secure development and implementation of AI capabilities, the move aligns with President Biden’s recent Executive Order.

“DHS has a broad leadership role in advancing the responsible use of AI and this cybersecurity roadmap is one important element of our work,” said Secretary of Homeland Security Alejandro N. Mayorkas.

“The Biden-Harris Administration is committed to building a secure and resilient digital ecosystem that promotes innovation and technological progress.” 

Following the Executive Order, DHS is mandated to globally promote AI safety standards, safeguard US networks and critical infrastructure, and address risks associated with AI—including potential use “to create weapons of mass destruction”.

“In last month’s Executive Order, the President called on DHS to promote the adoption of AI safety standards globally and help ensure the safe, secure, and responsible use and development of AI,” added Mayorkas.

“CISA’s roadmap lays out the steps that the agency will take as part of our Department’s broader efforts to both leverage AI and mitigate its risks to our critical infrastructure and cyber defenses.”

CISA’s roadmap outlines five strategic lines of effort, providing a blueprint for concrete initiatives and a responsible approach to integrating AI into cybersecurity.

CISA Director Jen Easterly highlighted the dual nature of AI, acknowledging its promise in enhancing cybersecurity while acknowledging the immense risks it poses.

“Artificial Intelligence holds immense promise in enhancing our nation’s cybersecurity, but as the most powerful technology of our lifetimes, it also presents enormous risks,” commented Easterly.

“Our Roadmap for AI – focused at the nexus of AI, cyber defense, and critical infrastructure – sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”

The outlined lines of effort are as follows:

  • Responsibly use AI to support our mission: CISA commits to using AI-enabled tools ethically and responsibly to strengthen cyber defense and support its critical infrastructure mission. The adoption of AI will align with constitutional principles and all relevant laws and policies.
  • Assess and Assure AI systems: CISA will assess and assist in secure AI-based software adoption across various stakeholders, establishing assurance through best practices and guidance for secure and resilient AI development.
  • Protect critical infrastructure from malicious use of AI: CISA will evaluate and recommend mitigation of AI threats to critical infrastructure, collaborating with government agencies and industry partners. The establishment of JCDC.AI aims to facilitate focused collaboration on AI-related threats.
  • Collaborate and communicate on key AI efforts: CISA commits to contributing to interagency efforts, supporting policy approaches for the US government’s national strategy on cybersecurity and AI, and coordinating with international partners to advance global AI security practices.
  • Expand AI expertise in our workforce: CISA will educate its workforce on AI systems and techniques, actively recruiting individuals with AI expertise and ensuring a comprehensive understanding of the legal, ethical, and policy aspects of AI-based software systems.

“This is a step in the right direction. It shows the government is taking the potential threats and benefits of AI seriously. The roadmap outlines a comprehensive strategy for leveraging AI to enhance cybersecurity, protect critical infrastructure, and foster collaboration. It also emphasises the importance of security in AI system design and development,” explains Joseph Thacker, AI and security researcher at AppOmni.

“The roadmap is pretty comprehensive. Nothing stands out as missing initially, although the devil is in the details when it comes to security, and even more so when it comes to a completely new technology. CISA’s ability to keep up may depend on their ability to get talent or train internal folks. Both of those are difficult to accomplish at scale.”

CISA invites stakeholders, partners, and the public to explore the Roadmap for Artificial Intelligence and gain insights into the strategic vision for AI technology and cybersecurity here.

See also: Google expands partnership with Anthropic to enhance AI safety

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DHS AI roadmap prioritises cybersecurity and national safety appeared first on AI News.

The Future-Proofed Datacenter: DDC Delivers 85kW Air-Cooled Density for AI and HPC Workloads

[SPONSORED GUEST ARTICLE] At DDC, the global leader in scalable datacenter-to-edge solutions, we are taking an innovative approach to building new and retrofitting legacy datacenters. Today, our patented cabinet technology can be deployed in nearly any environment or facility and supports one of the highest-density, air-cooled thermal loads—85kW per cabinet—on the market. In a recent deployment with TierPoint, a US-based colocation provider, we are supporting a 26,000 sq ft facility augmentation outside of Allentown, PA.

The post The Future-Proofed Datacenter: DDC Delivers 85kW Air-Cooled Density for AI and HPC Workloads appeared first on High-Performance Computing News Analysis | insideHPC.
