-
Quantum Zeitgeist
- Classiq, NVIDIA, and Tel Aviv Medical Center Launch Quantum Computing Initiative for Healthcare
Quantum software company Classiq, in collaboration with NVIDIA and the Tel Aviv Sourasky Medical Center, has launched the Quantum Computing for Life Sciences & Healthcare Center. The initiative aims to develop quantum algorithms and applications to revolutionise life sciences and healthcare, including drug discovery, molecular analysis, and personalised medical treatments. The centre will also address challenges in supply chain and treatment coordination.
-
AI News
- Wolfram Research: Injecting reliability into generative AI
The hype surrounding generative AI and the potential of large language models (LLMs), spearheaded by OpenAI’s ChatGPT, appeared at one stage to be practically insurmountable. It was certainly inescapable. More than one in four dollars invested in US startups this year went to an AI-related company, while OpenAI revealed at its recent developer conference that ChatGPT continues to be one of the fastest-growing services of all time.
Yet something continues to be amiss. Or rather, something amiss continues to be added in.

One of the biggest issues with LLMs is their tendency to hallucinate: in other words, to make things up. Figures vary, but one frequently cited hallucination rate is 15-20 percent, and one Google system notched up 27 percent. This would not be so bad if LLMs did not come across so assertively while doing it. Jon McLoone, Director of Technical Communication and Strategy at Wolfram Research, likens it to the ‘loudmouth know-it-all you meet in the pub.’ “He’ll say anything that will make him seem clever,” McLoone tells AI News. “It doesn’t have to be right.”
The truth is, however, that such hallucinations are an inevitability when dealing with LLMs. As McLoone explains, it is all a question of purpose. “I think one of the things people forget, in this idea of the ‘thinking machine’, is that all of these tools are designed with a purpose in mind, and the machinery executes on that purpose,” says McLoone. “And the purpose was not to know the facts.
“The purpose that drove its creation was to be fluid; to say the kinds of things that you would expect a human to say; to be plausible,” McLoone adds. “Saying the right answer, saying the truth, is a very plausible thing, but it’s not a requirement of plausibility.
“So you get these fun things where you can say ‘explain why zebras like to eat cacti’ – and it’s doing its plausibility job,” says McLoone. “It says the kinds of things that might sound right, but of course it’s all nonsense, because it’s just being asked to sound plausible.”
What is needed, therefore, is a kind of intermediary which is able to inject a little objectivity into proceedings – and this is where Wolfram comes in. In March, the company released a ChatGPT plugin, which aims to ‘make ChatGPT smarter by giving it access to powerful computation, accurate math[s], curated knowledge, real-time data and visualisation’. Alongside being a general extension to ChatGPT, the Wolfram plugin can also synthesise code.
“It teaches the LLM to recognise the kinds of things that Wolfram|Alpha might know – our knowledge engine,” McLoone explains. “Our approach on that is completely different. We don’t scrape the web. We have human curators who give the data meaning and structure, and we lay computation on that to synthesise new knowledge, so you can ask questions of data. We’ve got a few thousand data sets built into that.”
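The pattern McLoone describes can be sketched in a few lines: route a factual question to a curated computation engine and get back a short, computed answer rather than a plausible guess. The sketch below is illustrative rather than the plugin itself; it assumes Wolfram|Alpha's public Short Answers API and a placeholder AppID obtained from the Wolfram developer portal.

```python
# Minimal sketch: fetch a computed fact from Wolfram|Alpha's
# Short Answers API instead of letting an LLM guess.
import requests

APP_ID = "YOUR_APP_ID"  # placeholder; register at developer.wolframalpha.com

def computed_fact(question: str) -> str:
    """Return a short, computed plain-text answer to a factual question."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": APP_ID, "i": question},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text  # plain text, e.g. "about 384,400 kilometers"

if __name__ == "__main__":
    print(computed_fact("distance from Earth to the Moon in kilometers"))
```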
Wolfram has always been on the side of computational technology, and McLoone, who describes himself as a ‘lifelong computation person’, has been with the company for almost 32 years of its 36-year history. When it comes to AI, Wolfram therefore sits on the symbolic side of the fence, which suits logical reasoning use cases, rather than statistical AI, which suits pattern recognition and object classification.
The two approaches appear directly opposed, but they have more in common than you might think. “Where I see it, [approaches to AI] all share something in common, which is all about using the machinery of computation to automate knowledge,” says McLoone. “What’s changed over that time is the concept of at what level you’re automating knowledge.
“The good old fashioned AI world of computation is humans coming up with the rules of behaviour, and then the machine is automating the execution of those rules,” adds McLoone. “So in the same way that the stick extends the caveman’s reach, the computer extends the brain’s ability to do these things, but we’re still solving the problem beforehand.
“With generative AI, it’s no longer saying ‘let’s focus on a problem and discover the rules of the problem.’ We’re now starting to say, ‘let’s just discover the rules for the world’, and then you’ve got a model that you can try and apply to different problems rather than specific ones.
“So as the automation has gone higher up the intellectual spectrum, the things have become more general, but in the end, it’s all just executing rules,” says McLoone.
What’s more, as the differing approaches to AI share a common goal, so do the companies on either side. As OpenAI was building out its plugin architecture, Wolfram was asked to be one of the first providers. “As the LLM revolution started, we started doing a bunch of analysis on what they were really capable of,” explains McLoone. “And then, as we came to this understanding of what the strengths or weaknesses were, it was about that point that OpenAI were starting to work on their plugin architecture.
“They approached us early on, because they had a little bit longer to think about this than us, since they’d seen it coming for two years,” McLoone adds. “They understood exactly this issue themselves already.”
McLoone will be demonstrating the plugin with examples at the upcoming AI & Big Data Expo Global event in London on November 30-December 1, where he is speaking. Yet he is keen to stress that there are more varied use cases out there which can benefit from the combination of ChatGPT’s mastery of unstructured language and Wolfram’s mastery of computational mathematics.
One such example is performing data science on unstructured GP medical records. This ranges from correcting peculiar transcriptions on the LLM side – replacing ‘peacemaker’ with ‘pacemaker’ as one example – to using old-fashioned computation and looking for correlations within the data. “We’re focused on chat, because it’s the most amazing thing at the moment that we can talk to a computer. But the LLM is not just about chat,” says McLoone. “They’re really great with unstructured data.”
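As a toy illustration of that two-step workflow (the record snippets and the correction map below are invented, and a real pipeline would use an LLM rather than a fixed dictionary for the normalisation step):

```python
# Toy sketch: normalise mis-transcribed terms, then use classical
# computation (pandas) to look for correlations in the cleaned data.
import pandas as pd

corrections = {"peacemaker": "pacemaker"}  # the article's example fix

records = pd.DataFrame({
    "note": ["fitted with peacemaker", "pacemaker check", "no device fitted"],
    "age": [78, 81, 54],
})

# Step 1 (the LLM's job in the article; a fixed map here for brevity).
records["note"] = records["note"].replace(corrections, regex=True)

# Step 2: derive structure from the free text and compute a correlation.
records["has_pacemaker"] = records["note"].str.contains("pacemaker")
print(records["has_pacemaker"].astype(int).corr(records["age"]))
```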
How does McLoone see LLMs developing in the coming years? There will be various incremental improvements, better training practices will yield better results, and hardware acceleration may bring greater speed. “Where the big money goes, the architectures follow,” McLoone notes. A sea change on the scale of the last 12 months, however, can likely be ruled out: partly because of crippling compute costs, but also because we may have peaked in terms of training sets. If copyright rulings go against LLM providers, training sets will shrink going forward.
The reliability problem for LLMs, however, will be at the forefront of McLoone’s presentation. “Things that are computational are where it’s absolutely at its weakest, it can’t really follow rules beyond really basic things,” he explains. “For anything where you’re synthesising new knowledge, or computing with data-oriented things as opposed to story-oriented things, computation really is the way still to do that.”
Yet while responses may vary – one has to account for ChatGPT’s degree of randomness after all – the combination seems to be working, so long as you give the LLM strong instructions. “I don’t know if I’ve ever seen [an LLM] actually override a fact I’ve given it,” says McLoone. “When you’re putting it in charge of the plugin, it often thinks ‘I don’t think I’ll bother calling Wolfram for this, I know the answer’, and it will make something up.
“So if it’s in charge you have to give really strong prompt engineering,” he adds. “Say ‘always use the tool if it’s anything to do with this, don’t try and go it alone’. But when it’s the other way around – when computation generates the knowledge and injects it into the LLM – I’ve never seen it ignore the facts.
“It’s just like the loudmouth guy at the pub – if you whisper the facts in his ear, he’ll happily take credit for them.”
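That “whisper the facts in his ear” pattern is straightforward to express in code. The sketch below assumes the OpenAI Python SDK and an illustrative model name, and the hard-coded fact stands in for output from a computation engine such as the one sketched earlier; it is a minimal example of the injection pattern, not Wolfram's implementation.

```python
# Hedged sketch of "computation generates the knowledge and injects it
# into the LLM": compute the fact first, then hand it to the model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "How far is the Moon from Earth?"
fact = "The Moon is about 384,400 km from Earth on average."  # from a computation engine

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # Strong instructions, per McLoone: defer to the supplied facts.
        {"role": "system", "content": (
            "Answer using ONLY the verified facts provided. "
            "If they do not cover the question, say you do not know."
        )},
        {"role": "user", "content": f"Verified fact: {fact}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```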

Wolfram will be at the AI & Big Data Expo Global. Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Wolfram Research: Injecting reliability into generative AI appeared first on AI News.
-
Quantum Zeitgeist
- Cleveland Clinic and IBM Launch Quantum Innovation Program for Healthcare Start-ups
Cleveland Clinic has launched the Quantum Innovation Catalyzer Program, a competitive initiative for start-ups to explore quantum computing applications in healthcare and life sciences. Four companies will be selected to receive a 24-week immersive experience, including access to an IBM Quantum System One computer for research. The program is part of Cleveland Clinic and IBM’s 10-year partnership aimed at advancing biomedical research through quantum and advanced computing.
-
Quantum Zeitgeist
- Artificial Intelligence Risks: How the US Aims to Make AI Safe, Secure, and Trustworthy
President Biden has issued an Executive Order to establish new standards for AI safety and security, protect privacy, advance equity and civil rights, and promote innovation and competition. The order requires developers of powerful AI systems to share safety test results with the U.S. government. The National Institute of Standards and Technology will set rigorous standards for testing AI systems, and the Departments of Energy and Homeland Security will address AI threats to critical infrastructure.
-
AI News
- BSI: Closing ‘AI confidence gap’ key to unlocking benefits
The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.
According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.
This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49 percent) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than humans in detecting food contamination issues.
The study also highlighted a pressing need for education: 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment, while 37 percent expect to use AI regularly in medical settings by 2030.
Craig Civil, Director of Data Science and AI at BSI, said:
“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.
Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.
Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”
Sixty percent of respondents believed consumers needed protections regarding AI technologies, and 61 percent of Britons called for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.
Harold Pradal, Chief Commercial Officer at BSI, commented:
“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.
Closing the AI confidence gap is the first necessary step, it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”
The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.
The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.
(Photo by Suad Kamardeen on Unsplash)
See also: UK reveals AI Safety Summit opening day agenda

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post BSI: Closing ‘AI confidence gap’ key to unlocking benefits appeared first on AI News.
-
High-Performance Computing News Analysis | insideHPC
- Revolutionizing Bioscience Research: Creating an Atlas of the Human Body
Making healthcare and life science (HCLS) discoveries is time-consuming and requires considerable amounts of data. HPC enterprise infrastructure with AI and edge-to-cloud capabilities is required for biomedical research to make creating a human atlas of the body possible. The HPE, NVIDIA, and Flywheel collaboration, using the latest technologies designed for HCLS, promises to transform…
The post Revolutionizing Bioscience Research: Creating an Atlas of the Human Body appeared first on High-Performance Computing News Analysis | insideHPC.
-
Quantum Zeitgeist
- Pasqal and Qubit Pharmaceuticals Win $4.5M for Quantum Drug Discovery
French start-ups PASQAL and Qubit Pharmaceuticals have partnered with the Unitary Fund to win the Wellcome Trust's "Quantum for Bio" program. The project aims to design a new quantum algorithm to accelerate drug discovery, which will be implemented on PASQAL's quantum computers. The consortium will receive $4.5 million in funding over 30 months.
-
Quantum Zeitgeist
- Classiq Technologies Expands Quantum Computing Software Operations to Boston, Tapping into Local Talent
Classiq Technologies, a quantum computing software company, has announced its expansion into the Boston area. The company's vice president of strategic partnerships, Shai Lev, will relocate to lead the company's growth in North America. The expansion is a strategic move to tap into Boston's thriving quantum ecosystem, renowned academic institutions, and key industries pursuing quantum applications.
-
Quantum Zeitgeist
- £15M Quantum Catalyst Fund to Revolutionise UK
The UK government has announced the first winners of a £15 million competition, the Quantum Catalyst Fund, aimed at exploring the benefits of using quantum technologies across various sectors such as health, transport, and net zero.