Yesterday — 29 November 2023

AWS and NVIDIA expand partnership to advance generative AI

By: Ryan Daws
29 November 2023 at 14:30

Amazon Web Services (AWS) and NVIDIA have announced a significant expansion of their strategic collaboration at AWS re:Invent. The collaboration aims to provide customers with state-of-the-art infrastructure, software, and services to fuel generative AI innovations.

The collaboration brings together the strengths of both companies, integrating NVIDIA’s latest multi-node systems with next-generation GPUs, CPUs, and AI software, along with AWS technologies such as Nitro System advanced virtualisation, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability.

Key highlights of the expanded collaboration include:

  1. Introduction of NVIDIA GH200 Grace Hopper Superchips on AWS:
    • AWS becomes the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology.
    • The NVIDIA GH200 NVL32 multi-node platform enables joint customers to scale to thousands of GH200 Superchips, providing supercomputer-class performance.
  2. Hosting NVIDIA DGX Cloud on AWS:
    • Collaboration to host NVIDIA DGX Cloud, an AI-training-as-a-service offering, on AWS, featuring GH200 NVL32 for accelerated training of generative AI and large language models.
  3. Project Ceiba supercomputer:
    • Collaboration on Project Ceiba, aiming to design the world’s fastest GPU-powered AI supercomputer with 16,384 NVIDIA GH200 Superchips and processing capability of 65 exaflops.
  4. Introduction of new Amazon EC2 instances:
    • AWS introduces three new Amazon EC2 instances, including P5e instances powered by NVIDIA H200 Tensor Core GPUs for large-scale generative AI and HPC workloads.
  5. Software innovations:
    • NVIDIA introduces software on AWS, such as NeMo Retriever microservice for chatbots and summarisation tools, and BioNeMo to speed up drug discovery for pharmaceutical companies.

This collaboration signifies a joint commitment to advancing the field of generative AI, offering customers access to cutting-edge technologies and resources.
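The Project Ceiba figures above can be sanity-checked against each other: dividing the quoted 65 exaflops across 16,384 GH200 Superchips gives roughly 4 petaflops per chip. A quick sketch (the per-chip breakdown and the FP8 interpretation are our assumptions, not claims from the announcement):

```python
# Sanity check of the Project Ceiba figures: 65 exaflops spread across
# 16,384 GH200 Superchips. The per-chip breakdown and FP8 interpretation
# are assumptions for illustration, not claims from the announcement.

TOTAL_EXAFLOPS = 65
NUM_SUPERCHIPS = 16_384

total_flops_per_sec = TOTAL_EXAFLOPS * 1e18   # 1 exaflop/s = 1e18 FLOP/s
per_chip_pflops = total_flops_per_sec / NUM_SUPERCHIPS / 1e15

# Roughly 3.97 petaflops per chip, in line with the published sparse FP8
# throughput of the H100 GPU inside each GH200.
print(f"{per_chip_pflops:.2f} PFLOPS per GH200")
```

In other words, the headline exaflop number follows directly from chip count times per-chip throughput, which makes the two quoted figures mutually consistent.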

Internally, Amazon’s robotics and fulfilment teams already use NVIDIA’s Omniverse platform to optimise warehouses in virtual environments before real-world deployment.

The integration of NVIDIA and AWS technologies will accelerate the development, training, and inference of large language models and generative AI applications across various industries.

(Photo by ANIRUDH on Unsplash)

See also: Inflection-2 beats Google’s PaLM 2 across common benchmarks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AWS and NVIDIA expand partnership to advance generative AI appeared first on AI News.

Before yesterday

OQC Launches Toshiko, World’s First Enterprise-Ready Quantum Computing Platform, Backed by $100m SBI Investment

27 November 2023 at 16:39
OQC, a global leader in quantum compute-as-a-service, has announced the public availability of OQC Toshiko, the world's first enterprise-ready quantum computing platform. The 32-qubit platform is deployed to commercial data centres, allowing businesses worldwide to access the technology. SBI Investment, Japan's leading venture capital fund, is leading OQC's $100m funding round. The platform is named after Toshiko Yuasa, the first female Japanese physicist. OQC is collaborating with global companies including Equinix, NVIDIA, AWS and McKinsey to bring quantum computing out of the lab and into the enterprise.

Inflection-2 beats Google’s PaLM 2 across common benchmarks

By: Ryan Daws
23 November 2023 at 09:54

Inflection, an AI startup aiming to create “personal AI for everyone”, has announced a new large language model dubbed Inflection-2 that beats Google’s PaLM 2.

Inflection-2 was trained on over 5,000 NVIDIA GPUs using approximately 10²⁵ floating point operations (FLOPs), putting it in the same league as PaLM 2 Large. However, early benchmarks show Inflection-2 outperforming Google’s model on tests of reasoning ability, factual knowledge, and stylistic prowess.

On a range of common academic AI benchmarks, Inflection-2 achieved higher scores than PaLM 2 on most. This included outscoring the search giant’s flagship on the diverse Massive Multitask Language Understanding (MMLU) tests, as well as the TriviaQA, HellaSwag, and Grade School Math (GSM8k) benchmarks.

The startup’s new model will soon power its personal assistant app Pi to enable more natural conversations and useful features.

Thrilled to announce that Inflection-2 is now the 2nd best LLM in the world! 💚✨🎉

It will be powering https://t.co/1RWFB5RHtF very soon. And available to select API partners in time. Tech report linked…

Come run with us! https://t.co/8DZwP1Qnqo

— Mustafa Suleyman (@mustafasuleyman) November 22, 2023

Inflection said its transition from NVIDIA A100 to H100 GPUs for inference – combined with optimisation work – will increase serving speed and reduce costs despite Inflection-2 being much larger than its predecessor.  

An Inflection spokesperson said this latest model brings them “a big milestone closer” towards fulfilling the mission of providing AI assistants for all. They added the team is “already looking forward” to training even larger models on their 22,000 GPU supercluster.

Safety is said to be a top priority for the researchers, with Inflection being one of the first signatories to the White House’s July 2023 voluntary AI commitments. The company said its safety team continues working to ensure models are rigorously evaluated and rely on best practices for alignment.

With impressive benchmarks and plans to scale further, Inflection’s latest effort poses a serious challenge to tech giants like Google and Microsoft who have so far dominated the field of large language models. The race is on to deliver the next generation of AI.

(Photo by Johann Walter Bantz on Unsplash)

See also: Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4


Anthropic upsizes Claude 2.1 to 200K tokens, nearly doubling GPT-4

By: Ryan Daws
22 November 2023 at 11:33

San Francisco-based AI startup Anthropic has unveiled Claude 2.1, an upgrade to its language model that boasts a 200,000-token context window—vastly outpacing the recently released 128,000-token GPT-4 Turbo model from OpenAI.

The release comes on the heels of an expanded partnership with Google that provides Anthropic access to advanced processing hardware, enabling the substantial expansion of Claude’s context-handling capabilities.

Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x decrease in hallucination rates, system prompts, tool use, and updated pricing.

Claude 2.1 is available over API in our Console, and is powering our https://t.co/uLbS2JNczH chat experience. pic.twitter.com/T1XdQreluH

— Anthropic (@AnthropicAI) November 21, 2023

With the ability to process lengthy documents like full codebases or novels, Claude 2.1 is positioned to unlock new potential across applications from contract analysis to literary study. 

The 200K token window represents more than just an incremental improvement—early tests indicate Claude 2.1 can accurately grasp information from prompts over 50 percent longer than GPT-4 before the performance begins to degrade.

Claude 2.1 (200K Tokens) – Pressure Testing Long Context Recall

We all love increasing context lengths – but what's performance like?

Anthropic reached out with early access to Claude 2.1 so I repeated the “needle in a haystack” analysis I did on GPT-4

Here's what I found:… pic.twitter.com/B36KnjtJmE

— Greg Kamradt (@GregKamradt) November 21, 2023
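The “needle in a haystack” test referenced in the tweet above has a simple structure: bury one out-of-place sentence at varying depths of an otherwise uniform long document, then ask the model to retrieve it at varying context lengths. A minimal sketch of the document-construction step (the filler text and needle here are invented placeholders; the real analysis used long essays, and the model call that scores retrieval is stubbed out entirely):

```python
# Sketch of the "needle in a haystack" recall test: bury a single
# out-of-place sentence at a chosen fractional depth in a long, uniform
# document. Filler and needle are invented placeholders.

def build_haystack(filler: str, needle: str, depth: float, target_chars: int) -> str:
    """Repeat `filler` up to ~target_chars, then insert `needle` at a
    fractional depth (0.0 = start of document, 1.0 = end)."""
    reps = target_chars // len(filler) + 1
    body = (filler * reps)[:target_chars]
    pos = int(len(body) * depth)
    return body[:pos] + " " + needle + " " + body[pos:]

filler = "The quick brown fox jumps over the lazy dog. "
needle = "The best thing to do in San Francisco is eat a sandwich."

# In the real analysis, `doc` plus a retrieval question would be sent to
# the model at many (depth, context length) combinations, and the answers
# scored against the needle.
doc = build_haystack(filler, needle, depth=0.5, target_chars=10_000)
print(doc.index(needle))
```

Sweeping `depth` and `target_chars` over a grid is what produces the heat-map style results the test is known for.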

Anthropic also touted a 50 percent reduction in hallucination rates for Claude 2.1 over version 2.0. Increased accuracy could put the model in closer competition with GPT-4 in responding precisely to complex factual queries.

Additional new features include an API tool for advanced workflow integration and “system prompts” that allow users to define Claude’s tone, goals, and rules at the outset for more personalised, contextually relevant interactions. For instance, a financial analyst could direct Claude to adopt industry terminology when summarising reports.
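Under the text-completions format Claude used at the time, a system prompt was simply text placed before the first Human turn. A rough sketch of the idea (the analyst persona and messages are invented, and the actual API call that would send `prompt` to the model is omitted):

```python
# Rough sketch of supplying a system prompt under Claude's
# text-completions format of the era: plain text placed before the first
# Human turn. Persona and messages are invented examples; no API call is
# made here.

HUMAN_TURN = "\n\nHuman:"
ASSISTANT_TURN = "\n\nAssistant:"

def build_prompt(system: str, user_message: str) -> str:
    """Prepend a system prompt to a single-turn Human/Assistant exchange."""
    return f"{system}{HUMAN_TURN} {user_message}{ASSISTANT_TURN}"

prompt = build_prompt(
    system="You are a financial analyst. Use standard industry terminology.",
    user_message="Summarise the attached quarterly report in five bullet points.",
)
print(prompt)
```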

However, the full 200K token capacity remains exclusive to paying Claude Pro subscribers for now. Free users will continue to be limited to Claude 2.0’s 100K tokens.

As the AI landscape shifts, Claude 2.1’s enhanced precision and adaptability promise to be a game changer—presenting new options for businesses exploring how to strategically leverage AI capabilities.

With its substantial context expansion and rigorous accuracy improvements, Anthropic’s latest offering signals its determination to compete head-to-head with leading models like GPT-4.

(Image Credit: Anthropic)

See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era


Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

By: Ryan Daws
21 November 2023 at 10:20

In the wake of the generative AI (GenAI) revolution, UK businesses find themselves at a crossroads between unprecedented opportunities and inherent challenges.

Paul O’Sullivan, Senior Vice President of Solution Engineering (UKI) at Salesforce, sheds light on the complexities of this transformative landscape, urging businesses to tread cautiously while embracing the potential of artificial intelligence.

Unprecedented opportunities

Generative AI has stormed the scene with remarkable speed. ChatGPT, for example, amassed 100 million users in a mere two months.

“If you put that into context, it took 10 years to reach 100 million users on Netflix,” says O’Sullivan.

This rapid adoption signals a seismic shift, promising substantial economic growth. O’Sullivan estimates that generative AI has the potential to contribute a staggering £3.5 trillion ($4.4 trillion) to the global economy.

“Again, if you put that into context, that’s about as much tax as the entire US takes in,” adds O’Sullivan.

One of its key advantages lies in driving automation, with the prospect of automating up to 40 percent of the average workday—leading to significant productivity gains for businesses.

The AI trust gap

However, amid the excitement, there looms a significant challenge: the AI trust gap. 

O’Sullivan acknowledges that despite being a top priority for C-suite executives, over half of customers remain sceptical about the safety and security of AI applications.

Addressing this gap will require a multi-faceted approach including grappling with issues related to data quality and ensuring that AI systems are built on reliable, unbiased, and representative datasets. 

“Companies have struggled with data quality and data hygiene. So that’s a key area of focus,” explains O’Sullivan.

Safeguarding data privacy is also paramount, with stringent measures needed to prevent the misuse of sensitive customer information.

“Both customers and businesses are worried about data privacy—we can’t let large language models store and learn from sensitive customer data,” says O’Sullivan. “Over half of customers and their customers don’t believe AI is safe and secure today.”

Ethical considerations

AI also prompts ethical considerations. Concerns about hallucinations – where AI systems generate inaccurate or misleading information – must be addressed meticulously.

Businesses must confront biases and toxicities embedded in AI algorithms, ensuring fairness and inclusivity. Striking a balance between innovation and ethical responsibility is pivotal to gaining customer trust.

“A trustworthy AI should consistently meet expectations, adhere to commitments, and create a sense of dependability within the organisation,” explains O’Sullivan. “It’s crucial to address the limitations and the potential risks. We’ve got to be open here and lead with integrity.”

As businesses embrace AI, upskilling the workforce will also be imperative.

O’Sullivan advocates for a proactive approach, encouraging employees to master the art of prompt writing. Crafting effective prompts is vital, enabling faster and more accurate interactions with AI systems and enhancing productivity across various tasks.

Moreover, understanding AI lingo is essential to foster open conversations and enable informed decision-making within organisations.

A collaborative future

Crucially, O’Sullivan emphasises a collaborative future where AI serves as a co-pilot rather than a replacement for human expertise.

“AI, for now, lacks cognitive capability like empathy, reasoning, emotional intelligence, and ethics—and these are absolutely critical business skills that humans need to bring to the table,” says O’Sullivan.

This collaboration fosters a sense of trust, as humans act as a check and balance to ensure the responsible use of AI technology.

By addressing the AI trust gap, upskilling the workforce, and fostering a harmonious collaboration between humans and AI, businesses can harness the full potential of generative AI while building trust and confidence among customers.

You can watch our full interview with Paul O’Sullivan below:

Paul O’Sullivan and the Salesforce team will be sharing their invaluable insights at this year’s AI & Big Data Expo Global. O’Sullivan will feature on a day one panel titled ‘Converging Technologies – We Work Better Together’.


Microsoft recruits former OpenAI CEO Sam Altman and Co-Founder Greg Brockman

By: Ryan Daws
20 November 2023 at 13:44

AI experts don’t stay jobless for long, as evidenced by Microsoft’s quick recruitment of former OpenAI CEO Sam Altman and Co-Founder Greg Brockman.

Altman, who was recently ousted by OpenAI’s board for reasons that have had no shortage of speculation, has found a new home at Microsoft. The announcement came after unsuccessful negotiations with OpenAI’s board to reinstate Altman.

I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.

— Ilya Sutskever (@ilyasut) November 20, 2023

Microsoft CEO Satya Nadella – who has long expressed confidence in Altman’s vision and leadership – revealed that Altman and Brockman will lead Microsoft’s newly established advanced AI research team.

Nadella expressed excitement about the collaboration, stating, “We’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.”

I’m super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation. We’ve learned a lot over the years about how to give founders and innovators space to build independent identities and cultures within Microsoft, including GitHub, Mojang Studios,…

— Satya Nadella (@satyanadella) November 20, 2023

The move follows Altman’s abrupt departure from OpenAI. Former Twitch CEO Emmett Shear has been appointed as interim CEO at OpenAI.

Today I got a call inviting me to consider a once-in-a-lifetime opportunity: to become the interim CEO of @OpenAI. After consulting with my family and reflecting on it for just a few hours, I accepted. I had recently resigned from my role as CEO of Twitch due to the birth of my…

— Emmett Shear (@eshear) November 20, 2023

Altman’s role at Microsoft is anticipated to build on the company’s strategy of allowing founders and innovators space to create independent identities, similar to Microsoft’s approach with GitHub, Mojang Studios, and LinkedIn.

Microsoft’s decision to bring Altman and Brockman on board coincides with the development of its custom AI chip. The Maia AI chip, designed to train large language models, aims to reduce dependence on NVIDIA.

While Microsoft reassures its commitment to the OpenAI partnership, valued at approximately $10 billion, it emphasises ongoing innovation and support for customers and partners.

As Altman and Brockman embark on leading Microsoft’s advanced AI research team, the industry will be watching closely to see what the high-profile figures can do with Microsoft’s resources at their disposal. The industry will also be observing whether OpenAI can maintain its success under different leadership.

(Photo by Turag Photography on Unsplash)

See also: Amdocs, NVIDIA and Microsoft Azure build custom LLMs for telcos


Umbar Shakir, Gate One: Unlocking the power of generative AI ethically

By: Ryan Daws
17 November 2023 at 08:54

Ahead of this year’s AI & Big Data Expo Global, Umbar Shakir, Partner and AI Lead at Gate One, shared her insights into the diverse landscape of generative AI (GenAI) and its impact on businesses.

From addressing the spectrum of use cases to navigating digital transformation, Shakir shed light on the challenges, ethical considerations, and the promising future of this groundbreaking technology.

Wide spectrum of use cases

Shakir highlighted the wide array of GenAI applications, ranging from productivity enhancements and research support to high-stakes areas such as strategic data mining and knowledge bots. She emphasised the transformational power of AI in understanding customer data, moving beyond simple sentiment analysis to providing actionable insights, thus elevating customer engagement strategies.

“GenAI now can take your customer insights to another level. It doesn’t just tell you whether something’s a positive or negative sentiment like old AI would do, it now says it’s positive or negative. It’s negative because X, Y, Z, and here’s the root cause for X, Y, Z,” explains Shakir.
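The root-cause sentiment analysis Shakir describes is typically elicited with a structured prompt. A hypothetical template (the field names and example review are invented for illustration, and the model call itself is omitted):

```python
# Hypothetical prompt template for root-cause sentiment analysis: instead
# of a bare positive/negative label, the model is asked for a reason and
# a root cause. Field names and the example review are invented.

TEMPLATE = (
    "Classify the sentiment of the customer review below.\n"
    'Respond with JSON: {{"sentiment": "positive|negative", '
    '"reason": "...", "root_cause": "..."}}\n\n'
    "Review: {review}"
)

def sentiment_prompt(review: str) -> str:
    return TEMPLATE.format(review=review)

prompt = sentiment_prompt("Delivery took three weeks and nobody answered my emails.")
print(prompt)
```

Asking for a machine-readable reason field is what turns a sentiment label into the kind of actionable insight described above.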

Powering digital transformation

Gate One adopts an adaptive strategy approach, abandoning traditional five-year strategies for more agile, adaptable frameworks.

“We have a framework – our 5P model – where it’s: identify your people, identify the problem statement that you’re trying to solve for, appoint some partnerships, think about what’s the right capability mix that you have, think about the pathway through which you’re going to deliver, be use case or risk-led, and then proof of concept,” says Shakir.

By solving specific challenges and aligning strategies with business objectives, Gate One aims to drive meaningful digital transformation for its clients.

Assessing client readiness

Shakir discussed Gate One’s diagnostic tools, which blend technology maturity and operating model innovation questions to assess a client’s readiness to adopt GenAI successfully.

“We have a proprietary tool that we’ve built, a diagnostic tool where we look at blending tech maturity capability type questions with operating model innovation questions,” explains Shakir.

By categorising clients as “vanguard” or “safe” players, Gate One tailors their approach to meet individual readiness levels—ensuring a seamless integration of GenAI into the client’s operations.

Key challenges and ethical considerations

Shakir acknowledged the challenges associated with GenAI, especially concerning the quality of model outputs. She stressed the importance of addressing biases, amplifications, and ethical concerns, calling for a more meaningful and sustainable implementation of AI.

“Poor quality data or poorly trained models can create biases, racism, sexism… those are the things that worry me about the technology,” says Shakir.

Gate One is actively working on refining models and data inputs to mitigate such problems.

The future of GenAI

Looking ahead, Shakir predicted a demand for more ethical AI practices from consumers and increased pressure on developers to create representative and unbiased models.

Shakir also envisioned a shift in work dynamics where AI liberates humans from mundane tasks to allow them to focus on solving significant global challenges, particularly in the realm of sustainability.

Later this month, Gate One will be attending and sponsoring this year’s AI & Big Data Expo Global. During the event, Gate One aims to share its ethos of meaningful AI and emphasise ethical and sustainable approaches.

Gate One will also be sharing with attendees GenAI’s impact on marketing and experience design, offering valuable insights into the changing landscape of customer interactions and brand experiences.

As businesses navigate the evolving landscape of GenAI, Gate One stands at the forefront, advocating for responsible, ethical, and sustainable practices and ensuring a brighter, more impactful future for businesses and society.

Umbar Shakir and the Gate One team will be sharing their invaluable insights at this year’s AI & Big Data Expo Global. Find out more about Umbar Shakir’s day one keynote presentation here.


Amdocs, NVIDIA and Microsoft Azure build custom LLMs for telcos

By: Ryan Daws
16 November 2023 at 12:09

Amdocs has partnered with NVIDIA and Microsoft Azure to build custom Large Language Models (LLMs) for the $1.7 trillion global telecoms industry.

Leveraging the power of NVIDIA’s AI foundry service on Microsoft Azure, Amdocs aims to meet the escalating demand for data processing and analysis in the telecoms sector.

The telecoms industry processes hundreds of petabytes of data daily. With the anticipation of global data transactions surpassing 180 zettabytes by 2025, telcos are turning to generative AI to enhance efficiency and productivity.

NVIDIA’s AI foundry service – comprising the NVIDIA AI Foundation Models, NeMo framework, and DGX Cloud AI supercomputing – provides an end-to-end solution for creating and optimising custom generative AI models.

Amdocs will utilise the AI foundry service to develop enterprise-grade LLMs tailored for the telco and media industries, facilitating the deployment of generative AI use cases across various business domains.

This collaboration builds on the existing Amdocs-Microsoft partnership, ensuring the adoption of applications in secure, trusted environments, both on-premises and in the cloud.

Enterprises are increasingly focusing on developing custom models to perform industry-specific tasks. Amdocs serves over 350 of the world’s leading telecom and media companies across 90 countries. This partnership with NVIDIA opens avenues for exploring generative AI use cases, with initial applications focusing on customer care and network operations.

In customer care, the collaboration aims to accelerate the resolution of inquiries by leveraging information from across company data. In network operations, the companies are exploring solutions to address configuration, coverage, or performance issues in real-time.

This move by Amdocs positions the company at the forefront of ushering in a new era for the telecoms industry by harnessing the capabilities of custom generative AI models.

(Photo by Danist Soh on Unsplash)

See also: Wolfram Research: Injecting reliability into generative AI


Global Investment Summit: UK Showcases Pioneering Innovations to World’s Top CEOs and Investors

14 November 2023 at 15:23
The UK's Global Investment Summit will host over 200 CEOs, including Stephen Schwarzman of Blackstone, David Solomon of Goldman Sachs, Amanda Blanc of Aviva, Ignacio Galán of Iberdrola, and Jamie Dimon of JP Morgan Chase. The summit will showcase British innovations in AI, quantum computing, agri-tech, clean growth, advanced manufacturing, life sciences and fashion. Barclays, HSBC and Lloyds Bank are confirmed sponsors. Companies such as McLaren, Aston Martin, Fruit Cast Ltd, Delta G, Quantum DX, Tokamak Energy and Core Power will exhibit their latest innovations. The summit is set to secure billions in investment for the UK economy.

Quantum Tech Industry Needs Diverse Workforce.

14 November 2023 at 15:03
A recent article in Nature highlights issues with quantum education. Quantum is an entire industry, but it is still nascent, and some of its purported benefits remain a long way off. The fundamentals are sound, but for many, quantum will feel like a solution in search of a problem. As the field and the quantum tech industry progress, however, more and more people are looking at how to educate themselves in all things quantum.

GitLab’s new AI capabilities empower DevSecOps

By: Ryan Daws
13 November 2023 at 17:27

GitLab is empowering DevSecOps with new AI-powered capabilities as part of its latest releases.

The recent GitLab 16.6 November release includes the beta launch of GitLab Duo Chat, a natural-language AI assistant. Additionally, the GitLab 16.7 December release sees the general availability of GitLab Duo Code Suggestions.

David DeSanto, Chief Product Officer at GitLab, said: “To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing DevSecOps teams to benefit from boosts to security, efficiency, and collaboration.”

GitLab Duo Chat – arguably the star of the show – provides users with invaluable insights, guidance, and suggestions. Beyond code analysis, it supports planning, security issue comprehension and resolution, troubleshooting CI/CD pipeline failures, aiding in merge requests, and more.

As part of GitLab’s commitment to providing a comprehensive AI-powered experience, Duo Chat joins Code Suggestions as the primary interface into GitLab’s AI suite within its DevSecOps platform.

GitLab Duo comprises a suite of 14 AI capabilities:

  • Suggested Reviewers
  • Code Suggestions
  • Chat
  • Vulnerability Summary
  • Code Explanation
  • Planning Discussions Summary
  • Merge Request Summary
  • Merge Request Template Population
  • Code Review Summary
  • Test Generation
  • Git Suggestions
  • Root Cause Analysis
  • Planning Description Generation
  • Value Stream Forecasting

In response to the evolving needs of development, security, and operations teams, Code Suggestions is now generally available. This feature assists in creating and updating code, reducing cognitive load, enhancing efficiency, and accelerating secure software development.

GitLab’s commitment to privacy and transparency stands out in the AI space. According to the GitLab report, 83 percent of DevSecOps professionals consider implementing AI in their processes essential, with 95 percent prioritising privacy and intellectual property protection in AI tool selection.

The State of AI in Software Development report by GitLab reveals that developers spend just 25 percent of their time writing code. The Duo suite aims to address this by reducing toolchain sprawl—enabling 7x faster cycle times, heightened developer productivity, and reduced software spend.

Kate Holterhoff, Industry Analyst at Redmonk, commented: “The developers we speak with at RedMonk are keenly interested in the productivity and efficiency gains that code assistants promise.

“GitLab’s Duo Code Suggestions is a welcome player in this space, expanding the available options for enabling an AI-enhanced software development lifecycle.”

(Photo by Pankaj Patel on Unsplash)

See also: OpenAI battles DDoS against its API and ChatGPT services


IonQ Reports $6.1M Q3 Revenue, Achieves $100M in Bookings in Three Years of Commercialization.

10 November 2023 at 15:30
Quantum computing company IonQ reports Q3 2023 revenues of $6.1 million, a 122% increase from the previous year. The company also announced a $25.5 million sale of its Quantum Networking System to the US Air Force Research Lab. IonQ's CEO, Peter Chapman, highlighted the company's achievement of $100 million in cumulative bookings within three years of commercialization. The company also unveiled two new quantum computers, IonQ Forte Enterprise, and IonQ Tempo, and aims to achieve a 64-qubit system by the end of 2025.

Google expands partnership with Anthropic to enhance AI safety

By: Ryan Daws
10 November 2023 at 15:56

Google has announced the expansion of its partnership with Anthropic to work towards achieving the highest standards of AI safety.

The collaboration between Google and Anthropic dates back to the founding of Anthropic in 2021. The two companies have closely collaborated, with Anthropic building one of the largest Google Kubernetes Engine (GKE) clusters in the industry.

“Our longstanding partnership with Google is founded on a shared commitment to develop AI responsibly and deploy it in a way that benefits society,” said Dario Amodei, co-founder and CEO of Anthropic.

“We look forward to our continued collaboration as we work to make steerable, reliable and interpretable AI systems available to more businesses around the world.”

Anthropic utilises Google’s AlloyDB, a fully managed PostgreSQL-compatible database, for handling transactional data with high performance and reliability. Additionally, Google’s BigQuery data warehouse is employed to analyse vast datasets, extracting valuable insights for Anthropic’s operations.

As part of the expanded partnership, Anthropic will leverage Google’s latest generation Cloud TPU v5e chips for AI inference. Anthropic will use the chips to efficiently scale its powerful Claude large language model, which trails only GPT-4 on many benchmarks.

The announcement comes on the heels of both companies participating in the inaugural AI Safety Summit (AISS) at Bletchley Park, hosted by the UK government. The summit brought together government officials, technology leaders, and experts to address concerns around frontier AI.

Google and Anthropic are also engaged in collaborative efforts with the Frontier Model Forum and MLCommons, contributing to the development of robust measures for AI safety.

To enhance security for organisations deploying Anthropic’s models on Google Cloud, Anthropic is now utilising Google Cloud’s security services. This includes Chronicle Security Operations, Secure Enterprise Browsing, and Security Command Center, providing visibility, threat detection, and access control.

“Anthropic and Google Cloud share the same values when it comes to developing AI: it needs to be done in both a bold and responsible way,” commented Thomas Kurian, CEO of Google Cloud.

“This expanded partnership with Anthropic – built on years of working together – will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Google and Anthropic’s expanded partnership promises to be a critical step in advancing AI safety standards and fostering responsible development.

(Photo by charlesdeluvio on Unsplash)

See also: Amazon is building a LLM to rival OpenAI and Google

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Google expands partnership with Anthropic to enhance AI safety appeared first on AI News.

D-Wave Quantum Reports Record Q3 Results with 50% Revenue Increase

D-Wave reported a 50% increase in revenue to $2.6 million for Q3 2023. The company's cash balance reached $53.3 million, the highest in its history. CEO Dr. Alan Baratz highlighted the company's growth in customer bookings and commercial revenue. D-Wave has signed new agreements with BBVA, QuantumBasel, NTT Docomo, Poznan Supercomputing and Networking Center, and Satispay. The company is also exploring integrating its quantum technology with machine learning. D-Wave has made significant progress in the development of high-coherence qubits and quantum error mitigation.

OpenAI battles DDoS against its API and ChatGPT services

By: Ryan Daws
9 November 2023 at 15:50

OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours.

While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that they are dealing with “periodic outages due to an abnormal traffic pattern reflective of a DDoS attack.”

Users affected by these incidents reported encountering errors such as “something seems to have gone wrong” and “There was an error generating a response” when accessing ChatGPT.

This recent wave of attacks follows a major outage that impacted ChatGPT and its API on Wednesday, along with partial ChatGPT outages on Tuesday, and elevated error rates in Dall-E on Monday.

OpenAI displayed a banner across ChatGPT’s interface, attributing the disruptions to “exceptionally high demand” and reassuring users that efforts were underway to scale their systems.

Threat actor group Anonymous Sudan has claimed responsibility for the DDoS attacks on OpenAI. According to the group, the attacks are in response to OpenAI’s perceived bias towards Israel and against Palestine.

The attackers utilised the SkyNet botnet, which recently incorporated support for application layer, or Layer 7 (L7), DDoS attacks. In Layer 7 attacks, threat actors overwhelm services at the application level with a massive volume of requests to strain the target’s server and network resources.
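A common first line of defence against application-layer floods of this kind is per-client rate limiting. The sketch below is purely illustrative (it is not OpenAI's actual mitigation, which has not been disclosed): a minimal sliding-window rate limiter that tracks recent request timestamps per client and rejects anything over the limit, the point at which a real service would typically return HTTP 429.

```python
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Allow at most `max_requests` per client within a rolling `window` seconds."""

    def __init__(self, max_requests, window):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)  # client_id -> timestamps of recent requests

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Evict timestamps that have fallen out of the rolling window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.max_requests:
            q.append(now)
            return True
        return False  # over the limit: reject the request (e.g. HTTP 429)
```

Rate limiting alone will not stop a large distributed attack, since traffic arrives from many addresses at once, but combined with upstream filtering it blunts the per-source request volume that L7 floods rely on.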

Brad Freeman, Director of Technology at SenseOn, commented:

“Distributed denial of service attacks are internet vandalism. Low effort, complexity, and in most cases more of a nuisance than a long-term threat to a business. Often DDoS attacks target services with high volumes of traffic which can be ‘off-ramped’ by their cloud or internet service provider.

However, as the attacks are on Layer 7 they will be targeting the application itself, therefore OpenAI will need to make some changes to mitigate the attack. It’s likely the threat actor is sending complex queries to OpenAI to overload it, I wonder if they are using AI-generated content to attack AI content generation.”

However, the attribution of these attacks to Anonymous Sudan has raised suspicions among cybersecurity researchers. Some experts suggest that this could be a false flag operation and that the group might instead have connections to Russia, which, along with Iran, is suspected of stoking the bloodshed and international outrage to benefit its domestic interests.

The situation once again highlights the ongoing challenges faced by organisations dealing with DDoS attacks and the complexities of accurately identifying the perpetrators.

(Photo by Johann Walter Bantz on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI battles DDoS against its API and ChatGPT services appeared first on AI News.

Amazon is building a LLM to rival OpenAI and Google

By: Ryan Daws
8 November 2023 at 14:53

Amazon is reportedly making substantial investments in the development of a large language model (LLM) named Olympus. 

According to Reuters, the tech giant is pouring millions into this project to create a model with a staggering two trillion parameters. OpenAI’s GPT-4, for comparison, is estimated to have around one trillion parameters.

This move puts Amazon in direct competition with OpenAI, Meta, Anthropic, Google, and others. The team behind Amazon’s initiative is led by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy.

Prasad, as the head scientist of artificial general intelligence (AGI) at Amazon, has unified AI efforts across the company. He brought in researchers from the Alexa AI team and Amazon’s science division to collaborate on training models, aligning Amazon’s resources towards this ambitious goal.

Amazon’s decision to invest in developing homegrown models stems from the belief that having their own LLMs could enhance the attractiveness of their offerings, particularly on Amazon Web Services (AWS).

Enterprises on AWS are constantly seeking top-performing models and Amazon’s move aims to cater to the growing demand for advanced AI technologies.

While Amazon has not provided a specific timeline for the release of the Olympus model, insiders suggest that the company’s focus on training larger AI models underscores its commitment to remaining at the forefront of AI research and development.

Training such massive AI models is a costly endeavour, primarily due to the significant computing power required.

Amazon’s decision to invest heavily in LLMs is part of its broader strategy, as revealed in an earnings call in April. During the call, Amazon executives announced increased investments in LLMs and generative AI while reducing expenditures on retail fulfillment and transportation.

Amazon’s move signals a new chapter in the race for AI supremacy, with major players vying to push the boundaries of the technology.

(Photo by ANIRUDH on Unsplash)

See also: OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Amazon is building a LLM to rival OpenAI and Google appeared first on AI News.

OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing

By: Ryan Daws
7 November 2023 at 11:59

OpenAI has announced a slew of new additions and improvements to its platform, alongside reduced pricing, aimed at empowering developers and enhancing user experience.

Following yesterday’s leak of a custom GPT-4 chatbot creator, OpenAI unveiled several other key features during its DevDay that promise a transformative impact on the landscape of AI applications:

  • GPT-4 Turbo: OpenAI introduced the preview of GPT-4 Turbo, the next generation of its renowned language model. This new iteration boasts enhanced capabilities and an extensive knowledge base encompassing world events up until April 2023.
    • One of GPT-4 Turbo’s standout features is the impressive 128K context window, allowing it to process the equivalent of more than 300 pages of text in a single prompt.
    • Notably, OpenAI has optimised the pricing structure, making GPT-4 Turbo 3x cheaper for input tokens and 2x cheaper for output tokens compared to its predecessor.
  • Assistants API: OpenAI also unveiled the Assistants API, a tool designed to simplify the process of building agent-like experiences within applications.
    • The API equips developers with the ability to create purpose-built AIs with specific instructions, leveraging additional knowledge and calling models and tools to perform tasks.
  • Multimodal capabilities: OpenAI’s platform now supports a range of multimodal capabilities, including vision, image creation (DALL·E 3), and text-to-speech (TTS).
    • GPT-4 Turbo can process images, opening up possibilities such as generating captions, detailed image analysis, and reading documents with figures.
    • Additionally, DALL·E 3 integration allows developers to create images and designs programmatically, while the text-to-speech API enables the generation of human-quality speech from text.
  • Pricing overhaul: OpenAI has significantly reduced prices across its platform, making it more accessible to developers.
    • GPT-4 Turbo input tokens are now 3x cheaper than its predecessor at $0.01, and output tokens are 2x cheaper at $0.03. Similar reductions apply to GPT-3.5 Turbo, catering to various user requirements and ensuring affordability.
  • Copyright Shield: To bolster customer protection, OpenAI has introduced Copyright Shield.
    • This initiative sees OpenAI stepping in to defend customers and cover the associated legal costs if they face copyright infringement claims related to the generally available features of ChatGPT Enterprise and the developer platform.
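Taking the quoted figures at face value, the pricing change is easy to put in concrete terms. The snippet below is a rough back-of-the-envelope sketch, assuming the quoted $0.01 and $0.03 prices are per 1,000 tokens (as OpenAI's pricing page lists them); `estimate_cost` is a hypothetical helper, not part of the OpenAI SDK.

```python
# Illustrative GPT-4 Turbo cost estimate, assuming the quoted prices
# of $0.01 (input) and $0.03 (output) are per 1,000 tokens.
INPUT_PRICE_PER_1K = 0.01
OUTPUT_PRICE_PER_1K = 0.03

def estimate_cost(input_tokens, output_tokens):
    """Return the estimated request cost in US dollars."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A prompt filling the full 128K context window with a 1,000-token reply:
# 128,000 * $0.01/1K + 1,000 * $0.03/1K = $1.28 + $0.03 = $1.31
```

On these numbers, even a maximal 128K-token prompt costs on the order of a dollar, which is the practical upshot of the 3x and 2x reductions the announcement highlights.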

OpenAI’s latest announcements mark a significant stride in the company’s mission to democratise AI technology, empowering developers to create innovative and intelligent applications across various domains.

See also: OpenAI set to unveil custom GPT-4 chatbot creator

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing appeared first on AI News.

Cleveland Clinic and IBM Launch Quantum Innovation Program for Healthcare Start-ups

By: The Quant
6 November 2023 at 19:34
Cleveland Clinic has launched the Quantum Innovation Catalyzer Program, a competitive initiative for start-ups to explore quantum computing applications in healthcare and life sciences. Four companies will be selected to receive a 24-week immersive experience, including access to an IBM Quantum System One computer for research. The program is part of Cleveland Clinic’s and IBM’s 10-year partnership aimed at advancing biomedical research through quantum and advanced computing. The application for the program is open until January 15, with the program launching in April.

Analog Quantum Circuits Secures $3M from Uniseed for Quantum Computing Development

6 November 2023 at 19:32
EQUS start-up Analog Quantum Circuits (AQC) has secured a AU$3 million investment from Uniseed for the development of key components for quantum computing. AQC, founded by EQUS (the ARC Centre of Excellence for Engineered Quantum Systems) Chief Investigators Professor Tom Stace and Associate Professor Arkady Fedorov, aims to meet the needs of the growing quantum computing industry. The company develops core microwave technologies for superconducting quantum computers, considered one of the most promising platforms globally. The technology, which has been in development for over five years, is based on research funded by the Australian Research Council through EQUS and Future Fellowships held by the founders.