
Red Hat on open, small language models for responsible, practical AI

Nitin Gupta - AI News - April 22, 2025


As geopolitical events reshape the world, it’s no surprise that they affect technology too – specifically, how the current AI market is changing: its accepted methodology, how models are developed, and the ways they’re put to use in the enterprise.

Expectations of what AI can deliver are, at present, being balanced against real-world results. A good deal of suspicion about the technology remains, counterweighted by those embracing it even at this nascent stage. Meanwhile, the closed nature of the best-known LLMs is being challenged by more open alternatives such as Llama, DeepSeek, and Baidu’s recently released Ernie X1.

In contrast, open source development provides transparency and the ability to contribute back, which is more in tune with the desire for “responsible AI”: a phrase that encompasses the environmental impact of large models, how AIs are used, what comprises their learning corpora, and issues around data sovereignty, language, and politics. 

As the company that’s demonstrated the viability of an economically sustainable open source development model for its business, Red Hat wants to extend its open, collaborative, and community-driven approach to AI. We spoke recently to Julio Guijarro, the CTO for EMEA at Red Hat, about the organisation’s efforts to unlock the undoubted power of generative AI models in ways that bring value to the enterprise, in a manner that’s responsible, sustainable, and as transparent as possible.

Julio underlined how much education is still needed in order for us to more fully understand AI, stating, “Given the significant unknowns about AI’s inner workings, which are rooted in complex science and mathematics, it remains a ‘black box’ for many. This lack of transparency is compounded where it has been developed in largely inaccessible, closed environments.”

There are also issues with language (European and Middle Eastern languages are very much under-served), data sovereignty, and fundamentally, trust. “Data is an organisation’s most valuable asset, and businesses need to make sure they are aware of the risks of exposing sensitive data to public platforms with varying privacy policies.”

The Red Hat response 

Red Hat’s response to global demand for AI has been to pursue what it feels will bring most benefit to end-users, and remove many of the doubts and caveats that are quickly becoming apparent when the de facto AI services are deployed. 

One answer, Julio said, is small language models, running locally or in hybrid clouds, on non-specialist hardware, and accessing local business information. SLMs are compact, efficient alternatives to LLMs, designed to deliver strong performance for specific tasks while requiring significantly fewer computational resources. There are smaller cloud providers that can be utilised to offload some compute, but the key is having the flexibility and freedom to choose to keep business-critical information in-house, close to the model, if desired. That’s important, because information in an organisation changes rapidly. “One challenge with large language models is they can get obsolete quickly because the data generation is not happening in the big clouds. The data is happening next to you and your business processes,” he said. 
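The local-first pattern Julio describes can be sketched in a few lines: business documents stay on-premise, and only the snippet relevant to a query would be passed to a locally hosted model. Everything below – the documents, the word-overlap scoring, the function names – is illustrative, not a production retrieval system.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: how many query words appear in the document."""
    text = doc.lower()
    return sum(1 for w in query.lower().split() if w in text)

def retrieve_context(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k most relevant local documents for a query."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

# In-house documents never leave the premises; only the selected snippet
# would be placed into the prompt for the on-premise small model.
local_docs = [
    "Q3 refund policy: refunds are processed within 14 days.",
    "Office hours: support is available 09:00-17:00 CET.",
]
print(retrieve_context("how long do refunds take", local_docs)[0])
```

In a real deployment the word-overlap score would be replaced by embedding similarity, but the data-sovereignty point is the same: the retrieval index and the model both sit next to the business data.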

There’s also the cost. “Your customer service querying an LLM can present a significant hidden cost – before AI, you knew that when you made a data query, it had a limited and predictable scope. Therefore, you could calculate how much that transaction could cost you. In the case of LLMs, they work on an iterative model. So the more you use it, the better its answer can get, and the more you like it, the more questions you may ask. And every interaction is costing you money. So the same query that before was a single transaction can now become a hundred, depending on who is using the model and how. When you are running a model on-premise, you can have greater control, because the scope is limited by the cost of your own infrastructure, not by the cost of each query.”
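The cost dynamic Julio describes can be made concrete with some toy arithmetic. All prices and token counts below are hypothetical, chosen only to show how a metered, multi-turn LLM exchange scales with usage while a fixed-scope database query does not.

```python
DB_QUERY_COST = 0.0001       # flat cost per database query (hypothetical)
PRICE_PER_1K_TOKENS = 0.01   # metered LLM price (hypothetical)

def llm_conversation_cost(turns: list[tuple[int, int]]) -> float:
    """Sum the cost of a multi-turn exchange.

    Each turn is (prompt_tokens, completion_tokens); with a metered hosted
    LLM, every follow-up question is billed again, so cost grows with usage.
    """
    total_tokens = sum(p + c for p, c in turns)
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# One customer-service session: an opening question plus two follow-ups.
session = [(300, 150), (450, 200), (600, 250)]
print(f"DB query:  ${DB_QUERY_COST:.4f}")
print(f"LLM chat:  ${llm_conversation_cost(session):.4f}")
```

The point is not the particular numbers but the shape of the curve: the database query costs the same no matter who asks, while the conversation’s cost depends on who is using the model and how.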

Organisations needn’t brace themselves for a procurement round that involves writing a huge cheque for GPUs, however. Part of Red Hat’s current work is optimising models (in the open, of course) to run on more standard hardware. It’s possible because the specialist models that many businesses will use don’t need the huge, general-purpose data corpus that has to be processed at high cost with every query. 
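One reason optimised models fit on standard hardware is weight compression. Below is a minimal sketch of symmetric int8 quantisation – a common optimisation technique, shown in pure Python purely for illustration – which stores each 32-bit weight in a single byte at the cost of a small, bounded rounding error.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map floats into the int8 range [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the quantised values."""
    return [v * scale for v in q]

weights = [0.52, -1.30, 0.07, 0.91]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(q)  # each value fits in one byte instead of four
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

Real pipelines (pruning, distillation, per-channel quantisation) are far more sophisticated, but the principle is the one Julio describes: strip out precision and parameters a specific use case doesn’t need, and the hardware bill shrinks accordingly.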

“A lot of the work that is happening right now is people looking into large models and removing everything that is not needed for a particular use case. If we want to make AI ubiquitous, it has to be through smaller language models. We are also focused on supporting and improving vLLM (the inference engine project) to make sure people can interact with all these models in an efficient and standardised way wherever they want: locally, at the edge or in the cloud,” Julio said. 
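vLLM can expose an OpenAI-compatible HTTP API, which is one way the same client code can reach a model locally, at the edge, or in the cloud. The sketch below builds such a request; the endpoint URL and model name are placeholders, and the network call is left commented out since it assumes a server is already running.

```python
import json
import urllib.request

def build_chat_request(model: str, user_message: str,
                       max_tokens: int = 128) -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

def ask(base_url: str, payload: dict) -> str:
    """POST the payload to an OpenAI-compatible server and return the reply."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

payload = build_chat_request("my-org/slm-finetune",
                             "Summarise our refund policy.")
# ask("http://localhost:8000", payload)  # uncomment with a server running
print(payload["model"])
```

Because the client only sees a standard HTTP endpoint, pointing it at a local SLM, an edge box, or a cloud deployment is a one-line change to the base URL.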

Keeping it small 

Using and referencing local data pertinent to the user means that outcomes can be crafted according to need. Julio cited projects in the Arab- and Portuguese-speaking worlds that wouldn’t be viable using the English-centric, household-name LLMs.

There are a couple of other issues, too, that early-adopter organisations have found in practical, day-to-day use of LLMs. The first is latency, which can be problematic in time-sensitive or customer-facing contexts. Having focused resources and relevantly tailored results just a network hop or two away makes sense.

Secondly, there is the trust issue: an integral part of responsible AI. Red Hat advocates for open platforms, tools, and models so we can move towards greater transparency, understanding, and the ability for as many people as possible to contribute. “It is going to be critical for everybody,” Julio said. “We are building capabilities to democratise AI, and that’s not only publishing a model, it’s giving users the tools to be able to replicate them, tune them, and serve them.” 

Red Hat recently acquired Neural Magic to help enterprises scale AI more easily, improve inference performance, and provide even greater choice and accessibility in how enterprises build and deploy AI workloads, with the vLLM project for open model serving. Together with IBM Research, Red Hat also released InstructLab to open the door to would-be AI builders who aren’t data scientists but who have the right business knowledge.

There’s a great deal of speculation around whether, or when, the AI bubble might burst, but such conversations tend to gravitate towards the economic reality that the big LLM providers will soon have to face. Red Hat believes that AI’s future is use-case-specific and inherently open source: a technology that will make business sense and that will be available to all. To quote Julio’s boss, Matt Hicks (CEO of Red Hat): “The future of AI is open.”

Supporting Assets: 

Tech Journey: Adopt and scale AI

