4 considerations to help organizations implement an AI code of conduct
Many organizations consider reinventing the wheel to accommodate AI tools, but
this creates a significant amount of unnecessary work. Instead, they should
subject any AI tool to the same rigorous procurement process that applies to
any product that concerns data security. The procurement process must also
take into consideration the organization’s privacy and ethical standards, to
ensure these are never compromised in the name of new technology. ... It's important to be conscious of the privacy policies of AI tools when using them in an enterprise environment, and to use them only under a commercial license. To address this risk, an AI code of conduct should stipulate that free tools are categorically banned for use in any business context. Instead, employees should be required to use an approved, officially procured, commercially licensed solution with full privacy protections. ... Every organization needs to remain aware of how its technology vendors use AI in the products and services it buys from them. To enable this, an AI code of conduct should also include policies that help the organization keep track of its vendor agreements.
From Microservices to Modular Monoliths
You know who really loves microservices? Cloud hosting companies like
Microsoft, Amazon, and Google. They make a lot of money hosting microservices.
They also make a lot of money selling you tools to manage your microservices.
They make even more money when you have to scale up your microservices to
handle the increased load on your system. ... So what do you do when you find
yourself in microservice hell? How do you keep the gains you (hopefully) made
in breaking up your legacy ball of mud, without having to constantly contend
with a massively distributed system? It may be time to (re)consider the
modular monolith. A modular monolith is a monolithic application that is
broken up into modules. Each module is responsible for a specific part of the
application. Modules can communicate with each other through well-defined
interfaces. This allows you to keep the benefits of a monolithic architecture,
while still being able to break up your application into smaller, more
manageable pieces. Yes, you'll still need to deal with some complexity
inherent to modularity, such as ensuring modules remain independent while
still being able to communicate with one another efficiently.
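To make the idea concrete, here is a minimal sketch of a modular monolith in Python. The module names (orders, billing) and the interface are hypothetical, invented for illustration; the point is simply that each module hides its internals behind a narrow, well-defined interface while everything still ships and runs as a single process.

from dataclasses import dataclass
from typing import Protocol


class BillingInterface(Protocol):
    # The only surface other modules may call on the billing module.
    def charge(self, customer_id: str, amount_cents: int) -> bool: ...


class BillingModule:
    # Internal implementation; the rest of the application depends only on
    # BillingInterface, never on these details.
    def charge(self, customer_id: str, amount_cents: int) -> bool:
        return amount_cents > 0  # stand-in for real payment logic


@dataclass
class OrdersModule:
    # The orders module talks to billing through its interface, not its internals.
    billing: BillingInterface

    def place_order(self, customer_id: str, amount_cents: int) -> str:
        if not self.billing.charge(customer_id, amount_cents):
            return "payment-failed"
        return "order-confirmed"


# Wiring happens in one process: no network hop, no service discovery.
orders = OrdersModule(billing=BillingModule())
print(orders.place_order("cust-42", 1999))

Because the call is in-process, a module can later be carved out into a real service by changing only the wiring, which is one reason the modular monolith works as a stepping stone in either direction.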
Deep Dive: Optimizing AI Data Storage Management
In an AI data pipeline, various stages align with specific storage needs to
ensure efficient data processing and utilization. Here are the typical stages
along with their associated storage requirements: Data collection and
pre-processing: The storage where the raw and often unstructured data is
gathered and centralized (increasingly into Data Lakes) and then cleaned and
transformed into curated data sets ready for training processes. Model training and processing: The storage that feeds the curated data set into GPUs for processing. This stage of the pipeline also needs to store training artifacts such as the hyperparameters, run metrics, validation data, model parameters and the final production inferencing model. Inferencing and model deployment: The mission-critical storage where the trained model is hosted for making predictions or decisions based on new data. The outputs of
inferencing are utilized by applications to deliver the results, often
embedded into information and automation processes. Storage for archiving:
Once the training stage is complete, various artifacts such as different sets
of training data and different versions of the model need to be stored
alongside the raw data.
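As a rough illustration only, the stage-to-storage mapping above can be captured in pipeline configuration; the sketch below restates the four stages in code, and the role descriptions are paraphrases rather than recommendations from the article.

from dataclasses import dataclass


@dataclass
class StageStorage:
    stage: str
    storage_role: str
    artifacts: list


PIPELINE = [
    StageStorage("collection_and_preprocessing",
                 "data lake for raw input and curated training sets",
                 ["raw data", "curated data sets"]),
    StageStorage("training_and_processing",
                 "high-throughput storage feeding GPUs",
                 ["hyperparameters", "run metrics", "validation data",
                  "model parameters", "production inferencing model"]),
    StageStorage("inferencing_and_deployment",
                 "mission-critical storage hosting the trained model",
                 ["deployed model", "inference outputs"]),
    StageStorage("archiving",
                 "long-term storage kept alongside the raw data",
                 ["training data versions", "model versions"]),
]

for s in PIPELINE:
    print(f"{s.stage}: {s.storage_role}")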
RAG (Retrieval Augmented Generation) Architecture for Data Quality Assessment
RAG is designed to leverage LLMs on your own content or data. It
involves retrieving relevant content to augment the context or insights as
part of the generation process. However, RAG is an evolving technology with
both strengths and limitations. RAG integrates information retrieval from a
dedicated, custom, and accurate knowledge base, reducing the risk of LLMs
offering general or non-relevant responses. For example, when the knowledge
base is tailored to a specific domain (e.g., legal documents for a law firm),
RAG equips the LLM with relevant information and terminology, improving the
context and accuracy of its responses. At the same time, there are limitations
associated with RAG. RAG heavily relies on the quality, accuracy, and
comprehensiveness of the information stored within the knowledge base.
Incomplete, inaccurate, or missing information can lead to misleading or irrelevant retrieved data. Overall, the success of RAG hinges on quality data. So, how are RAG models implemented? RAG has two key components: a retriever model and a generator model.
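Here is a minimal sketch of those two components in Python, with a toy keyword-overlap retriever and a placeholder generator; the knowledge-base entries are invented, and a production system would use vector embeddings, a vector store, and a real LLM call instead.

KNOWLEDGE_BASE = [
    "Clause 7.2 limits liability to the value of the contract.",
    "Termination requires 30 days written notice from either party.",
    "Confidential information must be returned within 14 days of termination.",
]


def retrieve(query, documents, top_k=2):
    # Retriever model (toy version): rank documents by term overlap with the query.
    query_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(query_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]


def generate(query, context):
    # Generator model: in a real system this prompt is sent to an LLM, which
    # answers grounded in the retrieved context rather than from memory alone.
    prompt = "Answer using only this context:\n" + "\n".join(context)
    prompt += "\n\nQuestion: " + query
    return prompt  # placeholder for the LLM's response


question = "What notice is required for termination?"
print(generate(question, retrieve(question, KNOWLEDGE_BASE)))

The quality point from the article shows up directly here: if the knowledge base is incomplete or inaccurate, the retriever can only hand the generator bad context.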
NoSQL Database Growth Has Slowed, but AI Is Driving Demand
As for MongoDB, it too is targeting generative AI use cases. In a recent post
on The New Stack, developer relations team lead Rick Houlihan explicitly
compared its solution to PostgreSQL, a popular open source relational database
system. Houlihan contended that systems like PostgreSQL were not designed for
the type of workloads demanded by AI: “Considering the well-known performance
limitations of RDBMS when it comes to wide rows and large data attributes, it
is no surprise that these tests indicate that a platform like PostgreSQL will
struggle with the kind of rich, complex document data required by generative
AI workloads.” Unsurprisingly, he concludes that using a document database
(like MongoDB) “delivers better performance than using a tool that simply
wasn’t designed for these workloads.” In defense of PostgreSQL, there is no
shortage of managed service providers for Postgres that provide AI-focused
functionality. Earlier this year I interviewed a “Postgres as a Platform”
company called Tembo, which has seen a lot of demand for AI extensions.
“Postgres has an extension called pgvector,” Tembo CTO Samay Sharma told
me.
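For readers curious what that looks like in practice, here is a minimal sketch of pgvector from Python, assuming a reachable Postgres instance with the extension installed and the psycopg2 driver available; the connection details, table name, vector dimension, and embeddings are illustrative, and real embeddings would come from an embedding model.

import psycopg2

# Connection details are placeholders.
conn = psycopg2.connect("dbname=appdb user=app password=secret host=localhost")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        content text,
        embedding vector(3)
    );
""")
cur.execute("INSERT INTO documents (content, embedding) VALUES (%s, %s);",
            ("example passage", "[0.1, 0.2, 0.3]"))

# '<->' is pgvector's Euclidean-distance operator: order rows by distance to
# the query embedding and keep the closest matches.
cur.execute("SELECT content FROM documents ORDER BY embedding <-> %s::vector LIMIT 5;",
            ("[0.1, 0.2, 0.25]",))
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()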
Let’s Finally Build Continuous Database Reliability! We Deserve It
While we have worked hard to make our CI/CD pipelines fast and have learned how to deploy and test applications reliably, we have not made the same advances in the database world. It’s time to bring continuous reliability to databases as well. To do
that, developers need to own their databases. Once developers take over the
ownership, they will be ready to optimize the pipelines, thereby achieving
continuous reliability for databases. This shift of ownership needs to be
consciously driven by technical leaders. ... The primary advantage of
implementing database guardrails and empowering developers to take ownership
of their databases is scalability. This approach eliminates cross-team bottlenecks, unlocking each team’s full potential and enabling it to operate at its optimal speed. By removing the need to collaborate with other teams that lack
comprehensive context, developers can work more swiftly, reducing
communication overhead. Just as we recognized that streamlining communication
between developers and system engineers was the initial step, leading to the
evolution into DevOps engineers, the objective here is to eliminate dependence
on other teams.
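What a guardrail can look like in practice: the sketch below is one possible CI step that scans pending SQL migrations for statements that commonly cause trouble on large production tables. The migrations/ directory layout and the pattern list are assumptions made for illustration, not a rule set from the article.

import pathlib
import re
import sys

# Patterns that frequently cause locks or data loss when run against large,
# busy tables; a real guardrail set would be tuned to the team's database.
RISKY_PATTERNS = {
    r"\bdrop\s+table\b": "dropping a table is irreversible",
    r"\balter\s+table\b.*\bnot\s+null\b": "adding NOT NULL may rewrite and lock a large table",
    r"\bcreate\s+index\b(?!\s+concurrently\b)": "index creation without CONCURRENTLY blocks writes",
}


def check_migration(path):
    sql = path.read_text().lower()
    return [reason for pattern, reason in RISKY_PATTERNS.items()
            if re.search(pattern, sql, flags=re.DOTALL)]


def main():
    failures = []
    for migration in sorted(pathlib.Path("migrations").glob("*.sql")):
        failures += [f"{migration.name}: {reason}" for reason in check_migration(migration)]
    for failure in failures:
        print("GUARDRAIL:", failure)
    return 1 if failures else 0  # a non-zero exit code fails the pipeline


if __name__ == "__main__":
    sys.exit(main())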
Digital Transformation: Making Information Work for You
With information generated by digital transactions, the first goal is to ensure that the knowledge garnered does not remain confined to those directly participating in the transaction. Lessons learned from the
transaction should become part of the greater organizational memory. This does
not mean that every single transaction needs to be reported to every person in
the organization. It also doesn’t mean that the information needs to be
elevated in the same form or at the same velocity to all recipients. Those
participating in the transaction need an operational view of the transaction.
This needs to happen in real time. The information is the enabler of the
human-to-computer-to-human transaction and the speed of that information flow
needs to be as quick as it was in the human-to-human transaction. Otherwise,
it will be viewed as a roadblock instead of an enabler. As it escalates to the
next level of management, the information needs to evolve to a managerial
view. Managers are more interested in anomalies and outliers or data at a
summary level. This level of information is no less impactful to the
organizational memory but is associated with a different level of
decision-making.
Generative AI won’t fix cloud migration
The allure of generative AI lies in its promise of automation and efficiency.
If cloud migration was a one-size-fits-all scenario, that would work. But each
enterprise faces unique challenges based on its technological stack, business
requirements, and regulatory environment. Expecting a generative AI model to
handle all migration tasks seamlessly is unrealistic. ... Beyond the initial
investment in AI tools, the hidden costs of generative AI for cloud migration
add up quickly. For instance, running generative AI models often requires
substantial computational resources, which can be expensive. Also, keeping
generative AI models updated and secure demands robust API management and
cybersecurity measures. Finally, AI models need continual refinement and
retraining to stay relevant, incurring ongoing costs. ... Successful business strategy is about recognizing what works well and what needs to be improved. We all
understand that AI is a powerful tool and has been for decades, but it needs
to be considered carefully—once you’ve identified the specific problem you’re
looking to solve. Cloud migration is a complex, multifaceted process that
demands solutions tailored to unique enterprise needs.
Navigating Regulatory and Technological Shifts in IIoT Security
Global regulations play a pivotal role in shaping the cybersecurity landscape
for IIoT. The European Union’s Cyber Resilience Act (CRA) is a prime example,
setting stringent requirements for manufacturers supplying products to Europe.
By January 2027, companies must meet comprehensive standards addressing
security features, vulnerability management, and supply chain security. ...
The journey towards securing IIoT environments is multifaceted, requiring
manufacturers to navigate regulatory requirements, technological advancements,
and proactive risk management strategies. Global regulations like the EU’s
Cyber Resilience Act set critical standards that drive industry-wide
improvements. At the same time, technological solutions such as PKI and SBOMs
play essential roles in maintaining the integrity and security of connected
devices. By adopting a collaborative approach and leveraging robust security
frameworks, manufacturers can create resilient IIoT ecosystems that withstand
evolving cyber threats. The collective effort of all stakeholders is paramount
to ensuring the secure and reliable operation of industrial environments in
this new era of connectivity.
Green Software Foundation: On a mission to decarbonize software
One of the first orders of business in increasing awareness: getting
developers and companies to understand what green software really is. Instead
of reinventing the wheel, the foundation reviewed a course in the concepts of
green software that Hussain had developed while at Microsoft. To provide an
easy first step for organizations to take, the foundation borrowed from
Hussain’s materials and created a new basic training course, “Principles of
Green Software Engineering.” The training is only two or three hours long and brings all students to the same baseline. ... When it comes to software development, computing inefficiencies (and carbon footprints) are more visible (bulky libraries, for example) and engineers can improve them more easily.
Everyday business operations, on the other hand, are a tad opaque but still
contribute to the company’s overall sustainability score. Case in point: The
carbon footprint of a Zoom call is harder to measure, Hussain points out. The
foundation helped to define a Software Carbon Intensity (SCI) score, which
applies to all business operations including software development and SaaS
programs employees might use.
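As a reference point, the SCI combines operational and embodied emissions per functional unit; the Green Software Foundation specifies it as SCI = (E x I + M) per R. A tiny sketch of that arithmetic, with made-up numbers:

def sci(energy_kwh, carbon_intensity_g_per_kwh, embodied_g, functional_units):
    # E * I captures operational emissions, M adds the share of embodied
    # hardware emissions, and dividing by R expresses the result per
    # functional unit (per API call, per user, per job, and so on).
    operational = energy_kwh * carbon_intensity_g_per_kwh
    return (operational + embodied_g) / functional_units


# Example: 1.2 kWh at 450 gCO2eq/kWh plus 300 g of embodied emissions,
# spread over 10,000 API calls handled in the measurement window.
print(round(sci(1.2, 450.0, 300.0, 10_000), 4), "gCO2eq per API call")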
Quote for the day:
"Real leadership is being the person
others will gladly and confidently follow." -- John C. Maxwell