Why the Network Matters to Generative AI
Applications today are distributed. Our core research tells us that more than half
(60%) of organizations operate hybrid applications, that is, with components
deployed in core, cloud, and edge locations. That makes the Internet their
network, and the lifeline upon which they depend for speed and, ultimately,
security. Furthermore, our focused research tells us that organizations are
already multi-model, on average deploying 2.9 models. And where are those models
going? Just over one-third (35%) are deploying in both public cloud and
on-premises. Applications that use those models, of course, are being
distributed in both environments. According to Red Hat, some of those models are
being used to facilitate the modernization of legacy applications. ... One is
likely tempted to ask why we need such a thing. The problem is we can’t affect
the Internet. Not really. For all our attempts to use QoS to prioritize traffic
and to carefully select a provider with the right peering points,
we can’t really do much about it. For one thing, over-the-Internet connectivity
doesn’t typically reach into another environment, in which there are all kinds
of network challenges like overlapping IP addresses, not to mention the
difficulty in standardizing security policies and monitoring network
activity.
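One concrete example of those cross-environment challenges is overlapping address space. The short Python sketch below, using the standard ipaddress module, shows how quickly ranges chosen independently in an on-premises network and a cloud VPC can collide; the CIDR blocks are hypothetical.

import ipaddress

# Hypothetical address plans for two independently managed environments.
on_prem_cidrs = ["10.0.0.0/16", "192.168.10.0/24"]
cloud_vpc_cidrs = ["10.0.8.0/21", "172.31.0.0/16"]

for a in on_prem_cidrs:
    for b in cloud_vpc_cidrs:
        if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
            print(f"Overlap: on-prem {a} collides with cloud {b}")

# Prints: Overlap: on-prem 10.0.0.0/16 collides with cloud 10.0.8.0/21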
Aware of what tech debt costs them, CIOs still can’t make it an IT priority
The trick for CIOs who have significant tech debt is to sell it to organization
leadership, he says. One way to frame the need to address tech debt is to tie it
to IT modernization. “You can’t modernize without addressing tech debt,” Saroff
says. “Talk about digital transformation.” ... “You don’t just say, ‘We’ve got
an old ERP system that is out of vendor support,’ because they’ll argue, ‘It
still works; it’s worked fine for years,’” he says. “Instead, you have to say,
‘We need a new ERP system because you have this new customer intimacy program,
and we’ll either have to spend millions of dollars doing weird integrations
between multiple databases, or we could upgrade the ERP.’” ... “A lot of it gets
into even modernization as you’re building new applications and new software,”
he says. “Oftentimes, if you’re interfacing with older platforms that have
sources of data that aren’t modernized, it can make those projects delayed or
more complicated.” As organizational leaders push CIOs to launch AI projects, an
overlooked area of tech debt is data management, adds Ricardo Madan, senior vice
president for global technology services at IT consulting firm TEKsystems.
Is efficiency on your cloud architect’s radar?
Remember that we can certainly measure the efficiency of each of the
architecture’s components, but that only tells half of the story. A system
may have anywhere from 10 to 1,000 components. Together, they create a converged
architecture, which provides several advantages in measuring and ensuring
efficiency. Converged architectures facilitate centralized management by
combining computing, storage, and networking resources. ... With an integrated
approach, converged architectures can dynamically distribute resources based on
real-time demand. This reduces idle resources and enhances utilization, leading
to better efficiency. Automation tools embedded within converged architectures
help automate routine tasks such as scaling, provisioning, and load balancing.
These tools can adjust resource allocation in real time, ensuring optimal
performance without manual intervention. Advanced monitoring tools and analytics
platforms built into converged architectures provide detailed insights into
resource usage, cost patterns, and performance metrics. This enables continuous
optimization and proactive management of cloud resources.
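As a rough illustration of that real-time adjustment, the Python sketch below applies a utilization-based scaling rule in the spirit of the Kubernetes Horizontal Pod Autoscaler formula (desired = ceil(current × observed / target)); the replica bounds and metric values are hypothetical.

import math

def desired_replicas(current: int, observed_util: float, target_util: float,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Scale the replica count toward a target utilization level."""
    desired = math.ceil(current * observed_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

# Example: 4 replicas running at 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(current=4, observed_util=0.90, target_util=0.60))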
ITSM concerns when integrating new AI services
The key to establishing stringent access controls lies in feeding each LLM only
the information that its users should consume. This approach eliminates the
concept of a generalist LLM fed with all the company’s information, thereby
ensuring that access to data is properly restricted and aligned with user roles
and responsibilities. ... To maintain strict control over sensitive data while
leveraging the benefits of AI, organizations should adopt a hybrid approach that
combines AI-as-a-Service (AIaaS) with self-hosted models. For tasks involving
confidential information, such as financial analysis and risk assessment,
deploying self-hosted AI models ensures data security and control. Meanwhile,
utilizing AIaaS providers like AWS for less sensitive tasks, such as predictive
maintenance and routine IT support, allows organizations to benefit from the
scalability and advanced features offered by cloud-based AI services. This
hybrid strategy ensures that sensitive data remains secure within the
organization’s infrastructure while taking advantage of the innovation and
efficiency provided by AIaaS for other operations.
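A minimal sketch of that hybrid routing idea is shown below; the task categories, endpoint names, and URLs are hypothetical placeholders, not any particular provider's API.

from dataclasses import dataclass

SENSITIVE_TASKS = {"financial_analysis", "risk_assessment"}    # stays in-house
ROUTINE_TASKS = {"predictive_maintenance", "it_support"}       # fine for AIaaS

@dataclass
class ModelEndpoint:
    name: str
    url: str

SELF_HOSTED = ModelEndpoint("self-hosted-llm", "https://llm.internal.example/v1")
AIAAS = ModelEndpoint("aiaas-provider", "https://api.aiaas.example/v1")

def route(task_type: str) -> ModelEndpoint:
    """Send confidential work to the self-hosted model, everything else to AIaaS."""
    return SELF_HOSTED if task_type in SENSITIVE_TASKS else AIAAS

print(route("financial_analysis").name)   # -> self-hosted-llm
print(route("it_support").name)           # -> aiaas-provider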
Fighting Back Against Multi-Staged Ransomware Attacks Crippling Businesses
Ransomware has evolved from lone wolf hackers operating from basements to
complex organized crime syndicates that operate just like any other
professional organization. Modern ransomware gangs employ engineers who
develop the malware and platform, help desk staff to answer technical
queries, analysts who identify target organizations, and, ironically,
PR pros for crisis management. The ransomware ecosystem also comprises
multiple groups with specific roles. For example, one group (operators) builds
and maintains the malware and rents out their infrastructure and expertise
(a.k.a. ransomware-as-a-service). Initial access brokers specialize in
breaking into organizations and selling the acquired access, data, and
credentials. Ransomware affiliates execute the attack, compromise the victim,
manage negotiations, and share a portion of their profits with the operators.
Even state-sponsored attackers have joined the ransomware game due to its
potential to cause wide-scale disruption and because it is very lucrative.
Optimizing Software Quality: Unit Testing and Automation
Any long-term project without proper test coverage is destined to be rewritten
from scratch sooner or later. Unit testing is a must-have for the majority of
projects, yet there are cases when one might omit this step: for example, you
are creating a project for demonstration purposes, the timeline is very
tight, or your system is a combination of hardware and software and, at the
beginning of the project, it's not entirely clear what the final product will
look like. ... In automated testing, test cases are executed
automatically. This happens much faster than manual testing and can even run
overnight, since the whole process requires minimal human
intervention. This approach is an absolute game changer when you need to get
quick feedback. However, as with any automation, it may need substantial time
and financial resources during the initial setup stage. Even so, it is well
worth the investment, as it will make the whole process more efficient and the
code more reliable. The first step here is to understand whether the project
incorporates test automation and to ensure that it has a robust
test automation framework in place.
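As a small, self-contained example of what such a test might look like, the sketch below uses pytest against a made-up parse_price helper; neither the helper nor the test names come from the article.

# test_pricing.py -- run with: pytest -q
import pytest

def parse_price(raw: str) -> float:
    """Convert a user-entered price string like ' $19.99 ' into a float."""
    cleaned = raw.strip().lstrip("$")
    return round(float(cleaned), 2)

def test_parse_price_strips_symbol_and_whitespace():
    assert parse_price(" $19.99 ") == 19.99

def test_parse_price_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        parse_price("not a price")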
In the age of gen AI upskilling, learn and let learn
Gen AI upskilling is neither a one-off endeavor nor a quick fix. The
technology’s sophistication and ongoing evolution require dedicated
educational pathways powered by continuous learning opportunities and
financial support. So, as leaders, we need to provide resources for employees
to participate in learning opportunities (that is, workshops), attend
third-party courses offered by groups like LinkedIn, or receive tuition
reimbursements for upskilling opportunities found independently. We must also
ensure that these resources are accessible to our entire employee base,
regardless of the nature of an employee’s day-to-day role. From there, you can
institutionalize mechanisms for documenting and sharing learnings. This
includes building and popularizing communication avenues that motivate
employees to share feedback, learn together and surface potential roadblocks.
Encouraging a healthy dialogue around learning, and contributing to these
conversations yourself, often leads to greater innovation across your
organization. At my company, we tend to blend the learning and sharing
together.
Embracing Technology: Lessons Business Leaders Can Learn from Sports Organizations
To maintain their competitive edge, sports organizations are undertaking
comprehensive digital transformations. Digital technologies are integrated
across all facets of operations, transforming people, processes, and
technology. Data analytics guide decisions in areas such as player
recruitment, game strategies, and marketing efforts. ... The convergence
of sports and technology reveals new business opportunities. Sponsorships from
technology companies showcase their capabilities to targeted audiences and
open up new markets. Innovations in sports technology, such as advanced
training equipment and analytical tools, are driving unprecedented
possibilities. By embracing these insights, business leaders can unlock new
avenues for growth and innovation in their own industries. Partnering with
technology firms can lead to the development of new products, services, and
market opportunities, ensuring sustained success and relevance in an
ever-evolving business landscape.
Containerization Can Render Apps More Agile Painlessly
Application development and deployment methods will change because the app
developer no longer has to think about the integration of an app with an
underlying operating system and associated infrastructure. This is because the
container already has the correct configuration of all these elements. If app
developers want to deploy their app immediately in both Linux and Windows
environments, they can do so. ... Most IT staff have found that they need
specialized tools for container management, and that they can’t use the tools
that they are accustomed to. Platforms and vendors such as Kubernetes,
Dynatrace, and Docker all provide container management tools, but mastering
these tools requires IT staff to be trained on them. Security and governance also present
challenges in the container environment. Containers share the host operating
system's kernel but run from their own base OS images, so when an OS-level
security vulnerability is discovered, the affected base images across all
containers must be patched in step to resolve it. In cases like this, it's ideal to
have a means of automating the fix process, but it might be necessary to do it
manually at first.
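As one hedged illustration of that automation step, the Python sketch below inventories the images behind running containers and flags any that appear on an internal "needs rebuild" list following a base-image patch; it assumes the Docker CLI is installed, and the image names on the list are hypothetical.

import subprocess

# Hypothetical internal advisory: images that must be rebuilt on a patched base.
IMAGES_NEEDING_REBUILD = {"shop/api:1.4", "shop/worker:2.1"}

def running_images() -> set[str]:
    """Return the set of image tags used by currently running containers."""
    out = subprocess.run(["docker", "ps", "--format", "{{.Image}}"],
                         capture_output=True, text=True, check=True)
    return set(filter(None, out.stdout.splitlines()))

for image in sorted(running_images() & IMAGES_NEEDING_REBUILD):
    print(f"Rebuild and redeploy containers using {image}")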
Can AI even be open source? It's complicated
Clearly, we need to devise an open-source definition that fits AI programs to
stop these faux-source efforts in their tracks. Unfortunately, that's easier
said than done. While people constantly fuss over the finer details of what's
open-source code and what isn't, the Open Source Initiative (OSI) has nailed
down the definition, the Open Source Definition (OSD), for more than two
decades. The convergence of open source and AI is much more complicated. In
fact, Joseph Jacks, founder of the venture capital (VC) firm OSS
Capital, argued there is "no such thing as open-source AI" since "open source
was invented explicitly for software source code." It's true. In addition,
open source's legal foundation is copyright law. As Jacks observed, "Neural
Net Weights (NNWs) [which are essential in AI] are not software source code --
they are unreadable by humans, nor are they debuggable." As Stefano Maffulli,
OSI executive director, has told me, software and data are mixed in AI, and
existing open-source licenses are breaking down. Specifically, trouble emerges
when all that data and code are merged in AI/ML artifacts -- such as datasets,
models, and weights.
Quote for the day:
"Leadership does not depend on being
right." -- Ivan Illich