How the infrastructure industry is leveraging AI and digital twins
The challenges in scaling up the adoption of AI-powered digital twins across the
infrastructure sector are multifaceted. First, engineering firms often struggle
to obtain clear requirements from owner-operators. While these firms manage
design and sometimes construction, they rely on owner-operators to request a
digital twin as part of the final infrastructure asset. However, this
willingness to adopt digital twins is still lacking in some regions and sectors.
Second, many engineering firms are resource-constrained due to the high demand
for infrastructure. Cumins emphasizes, “This resource constraint makes it more
difficult for firms to invest in and effectively implement AI-powered digital
twins.” The increasing backlog of projects leaves little time for firms to adopt
new technologies and change their workflows. The third and more fundamental
challenge is access to historical data, which is crucial for training AI models.
“For instance,” Cumins explains, “we train our AI agents using Bentley’s
software, which teaches the rules of various engineering disciplines, such as
structural and geotechnical engineering. Engineering firms can then fine-tune
these AI agents using their historical data and project conditions.”
Serverless computing’s second act
Despite its early hurdles, serverless computing has bounced back, driven by a
confluence of evolving developer needs and technological advancements. Major
cloud providers such as AWS, Microsoft Azure, and Google Cloud have poured
substantial resources into serverless technologies to provide enhancements that
address earlier criticisms. For instance, improvements in debugging tools,
better handling of cold starts, and new monitoring capabilities are now part of
the serverless ecosystem. Additionally, integrating artificial intelligence and
machine learning promises to expand the possibilities of serverless
applications, making them more intelligent and responsive. ... One crucial
question remains: Is this resurgence enough to secure the future of serverless
computing, or is it simply an attempt by cloud providers to recoup their
significant investments? At issue is the number of enterprises that have
invested in serverless application development. This investment goes beyond
just paying for the serverless technology: building applications around it
creates lock-in, and moving them to other platforms is costly. A
temporary fix might not suffice in the long run. While the current trends and
forecasts are promising, the final verdict will largely depend on how serverless
can overcome past weaknesses and adapt to emerging technological landscapes and
enterprise needs.
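The cold-start improvements mentioned above are easier to appreciate with a concrete sketch. Below is a minimal, illustrative Python handler in the AWS Lambda style (the event fields and config values are invented for the example); the point is the standard mitigation pattern: expensive setup runs once per warm container at module load, so only the first invocation pays for it.

```python
import json
import time

# Expensive setup (loading config, creating clients) lives at module level,
# so it runs once per container at cold start; warm invocations reuse it.
CONFIG = {"greeting": "hello"}   # stand-in for real config/client setup
_WARM_SINCE = time.time()

def handler(event, context):
    # Per-invocation work stays small; the warm container reuses CONFIG.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": f"{CONFIG['greeting']}, {name}",
            "warm_for_seconds": round(time.time() - _WARM_SINCE, 1),
        }),
    }
```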
GenAI’s Impact on Cybersecurity
GenAI is both a blessing and a curse when it comes to cybersecurity. “On the one
hand, the incorporation of AI into security tools and technologies has greatly
enhanced vendor tooling to provide better threat detection and response through
AI-driven features that can analyze vast amounts of data, far quicker than ever
before, to identify patterns and anomalies that signal cyber threats,” says Erik
Avakian, technical counselor at Info-Tech Research Group. “These new features
can help predict new attack vectors, detect malware, vulnerabilities, phishing
patterns and other attacks in real-time, including automating the response to
certain cyber incidents. This greatly enhances our incident response processes
by reducing response times and allowing our security analysts to focus on other
and more complex tasks.” ... Meanwhile, hackers and hacking groups have already
incorporated AI and large language model (LLM) capabilities to carry out
incredibly sophisticated attacks, such as next-generation phishing and social
engineering attacks using deepfakes. “The incorporation of voice impersonation
and personalized content through ‘deepfake’ attacks via AI-generated videos,
voices or images makes these attacks particularly hard to detect and defend
against,” says Avakian.
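As a toy illustration of the detection side Avakian describes (not any vendor's actual product), the sketch below trains an unsupervised model on ordinary event features and flags an outlier. The features and numbers are invented for the example.

```python
# Flag events whose feature pattern deviates from the bulk of
# historical activity, using an off-the-shelf isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per event: [hour_of_day, megabytes_transferred, failed_logins]
normal = np.column_stack([
    rng.normal(13, 3, 500),   # mostly daytime activity
    rng.normal(20, 5, 500),   # modest transfer sizes
    rng.poisson(0.2, 500),    # rare failed logins
])
suspicious = np.array([[3.0, 900.0, 12.0]])  # 3 a.m., huge transfer, many failures

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks the event as anomalous
```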
Has the Cybersecurity Workforce Peaked?
Jobseekers are likely also falling victim to the trend of ghost-job postings.
Nearly half of hiring managers have admitted to keeping job postings open, even
when they are not looking to fill a specific position. That's being used as a
way to keep employees motivated, give the impression the company is growing, or
to placate overworked employees, according to a survey conducted by Clarify
Capital. These ghost jobs are a significant problem for cybersecurity job
seekers in particular, with one resume site estimating that 46% of listings for
a cybersecurity analyst in the United Kingdom were positions that would never be
filled, compared with about a third for all roles. ... Those economic pressures
are another reason that purported jobs are not materializing, says Jon Brandt,
director of professional practices and innovation at ISACA, an
information-technology certification organization. "People can respond to any
survey and say, hey, we have a need for 20 more people," he says. "But at the
end of the day, unless an organization is taking active steps to hire, then
that's not a data point we should be looking at right now." For entry-level
workers without significant experience, the picture is especially grim.
Cyberseek's career pathway data shows that demand for workers resembles a
reverse pyramid.
You Have an SBOM — What Are the Next Steps?
To maximize SBOM benefits, integrate them into your SDLC and automate the
process whenever possible. This ensures real-time updates, maintaining accuracy
as your software evolves. Regular updates reduce the risk of outdated data,
enhancing transparency and security. Automating SBOM creation within CI/CD
pipelines ensures an SBOM is produced with each build, providing a reliable
record of software components. By setting up quality gates in your CI/CD
workflows, you can scan SBOMs for security vulnerabilities and licensing issues,
stopping noncompliant components from moving forward in deployment. During
quality assurance (QA), SBOMs are vital for ensuring compliance and security
before release. They ensure each release meets industry standards and best
practices. By integrating SBOMs into CI/CD and QA processes, development teams
establish a robust framework for transparency and compliance, boosting software
supply chain security at all stages. ... Effective SBOM management extends
beyond the development phase. Once in production, SBOMs need to be continuously
monitored to ensure ongoing security and compliance, especially as new
vulnerabilities emerge.
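As a concrete sketch of such a quality gate (the file name, deny-list entries, and failure policy are illustrative assumptions, not from the article), the script below reads a CycloneDX-format JSON SBOM and fails the pipeline stage if a blocked component/version appears.

```python
# CI quality gate over a CycloneDX JSON SBOM: exit nonzero if any
# component/version pair appears on a deny-list, stopping the stage.
import json
import sys

DENYLIST = {("log4j-core", "2.14.1"), ("openssl", "1.0.2")}  # hypothetical entries

def gate(sbom_path: str) -> int:
    with open(sbom_path) as f:
        sbom = json.load(f)
    violations = [
        (c.get("name"), c.get("version"))
        for c in sbom.get("components", [])
        if (c.get("name"), c.get("version")) in DENYLIST
    ]
    for name, version in violations:
        print(f"BLOCKED: {name} {version} is on the deny-list")
    return 1 if violations else 0  # nonzero exit fails the pipeline stage

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "bom.json"))
```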
AI-Powered Enterprise Architecture: A Strategic Imperative
AI can significantly enhance EA reusable knowledge repositories, architecture
diagrams, and visualizations by analyzing real-time and historical projects,
programs, solution designs, and other relevant data to identify anomalies,
bottlenecks, and optimization opportunities in designing robust technology
solutions. AI-powered solution design monitoring systems could detect a sudden
increase in website traffic and automatically scale server resources to handle
the increased load, heading off the cost and end-user experience issues that
could otherwise impact the business. Technical experts can apply these
insights to future designs, accounting for aspects of application behavior
that might otherwise be overlooked. AI can streamline the
architecture design process by generating multiple design options, simulating
different scenarios, and optimizing designs based on performance and cost. Using
generative design techniques, AI can create innovative and efficient solution
design patterns that would be difficult or impractical to achieve through
traditional methods. For example, an AI-powered design tool could generate
multiple network designs, each with different topologies and configurations, and
then evaluate the performance and cost of each design to identify the optimal
solution.
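A toy version of that generate-and-evaluate loop might look like the following; the topologies, cost and latency figures, and scoring weights are all invented for illustration.

```python
# Enumerate candidate network designs, score each on assumed cost and
# performance models, and pick the best trade-off.
from itertools import product

TOPOLOGIES = ["star", "mesh", "ring"]
LINK_COSTS = {"star": 1.0, "mesh": 3.0, "ring": 1.5}  # relative cost per link
LATENCY = {"star": 2.0, "mesh": 1.0, "ring": 2.5}     # relative path length

def score(topology: str, redundancy: int, weight_cost: float = 0.5) -> float:
    cost = LINK_COSTS[topology] * (1 + redundancy)
    perf_penalty = LATENCY[topology] / (1 + 0.5 * redundancy)
    return weight_cost * cost + (1 - weight_cost) * perf_penalty

candidates = [(t, r, score(t, r)) for t, r in product(TOPOLOGIES, [0, 1, 2])]
best = min(candidates, key=lambda c: c[2])
print(f"best design: topology={best[0]}, redundancy={best[1]}, score={best[2]:.2f}")
```

A real tool would replace the lookup tables with simulation results, but the generate-then-select-by-score structure is the same.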
How enterprises can identify and control AI sprawl
AI sprawl refers to the uncontrolled proliferation of AI tools across an
organization. Just like with cybersecurity, there are too many tools solving
too many things without centralized oversight. This leads to inefficiencies,
redundancies, and significant security risks. For instance, various
departments, like sales and marketing, might independently adopt different AI
solutions for similar problems, but these solutions don’t integrate or align
with each other. This increases costs and operational inefficiencies. AI
sprawl also raises governance challenges, making it difficult to ensure data
quality, consistency, and security. ... CIOs are in a unique position because
they oversee multiple functions while CTOs tend to focus more on the
engineering side of the product. At Nutanix, we’re adopting a centralized AI
governance approach. We’ve established a cross-functional committee to take
inventory of all existing AI tools and develop a unified strategy. This
includes creating policies, frameworks, and best practices that align with the
company’s overall objectives. ... With AI tools spread across an organization,
it’s difficult to ensure data quality and security. Each tool might store or
process data in different ways, potentially exposing sensitive information and
increasing the risk of compliance violations, such as GDPR breaches.
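One hedged sketch of what that inventory step could look like in practice follows; the tool names, record fields, and flagging rules are all hypothetical.

```python
# A single registry of AI tools with governance metadata, so duplicate
# or unreviewed tools are easy to surface.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    department: str
    use_case: str
    stores_customer_data: bool
    reviewed: bool  # passed the governance committee's review

inventory = [
    AITool("LeadScorerX", "sales", "lead scoring", True, False),
    AITool("LeadRankerY", "marketing", "lead scoring", False, True),
]

# Surface redundancy: multiple departments solving the same problem.
by_use_case: dict[str, list[str]] = {}
for tool in inventory:
    by_use_case.setdefault(tool.use_case, []).append(tool.name)
for use_case, names in by_use_case.items():
    if len(names) > 1:
        print(f"possible sprawl: {names} all address '{use_case}'")

# Surface risk: unreviewed tools touching customer data (e.g., GDPR exposure).
for tool in inventory:
    if tool.stores_customer_data and not tool.reviewed:
        print(f"governance gap: {tool.name} stores customer data without review")
```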
Strengthening OT Cybersecurity in the Age of Industry 4.0
Historically, OT systems were not considered to be at significant risk due to their
perceived isolation from the Internet. Organizations relied on physical
security measures, such as door locks, passcodes, and badge readers, to
protect against hands-on access and disruption to physical operational
processes. However, the advent of the 4th Industrial Revolution, or Industry
4.0, has introduced smart technologies and advanced software to optimize
efficiency through automation and data analysis. This digital transformation
has interconnected OT and IT systems, creating new attack vectors for
adversaries to exploit and access sensitive data. ... First, security leaders
should isolate OT networks from IT networks and the Internet to limit the
attack surface and verify that the networks are segmented. This should be
monitored 24/7 to ensure network segmentation effectiveness and proper
functioning of security controls. This containment strategy helps prevent
lateral movement within the network during a breach. Real-time network
monitoring and the appropriate alert escalation (often notifying the plant
supervisor or controls engineer, who is best positioned to verify whether
access or a configuration change is appropriate and planned, not the IT SOC)
aids in the rapid detection and response to threats.
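A minimal sketch of that escalation rule, with hypothetical segment names, event fields, and contacts, might look like this.

```python
# Route alerts that touch OT segments to plant staff rather than the IT SOC,
# since they can judge whether an access or config change was planned.
OT_SEGMENTS = {"plant-floor", "scada", "plc"}

def route_alert(event: dict) -> str:
    # Traffic crossing into an OT segment suggests a segmentation failure:
    # escalate to the people who know the operational context.
    if event["source_segment"] in OT_SEGMENTS or event["dest_segment"] in OT_SEGMENTS:
        return "plant-supervisor@example.com"
    return "it-soc@example.com"

alert = {"source_segment": "corporate-it", "dest_segment": "scada",
         "detail": "new connection crossing the IT/OT boundary"}
print(route_alert(alert))  # -> plant-supervisor@example.com
```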
Tips for making sure your AI-powered FP&A efforts are successful
One of the biggest problems with AI is security. Many finance
teams hesitate to embrace AI solutions out of concerns that they could
undermine data privacy or weaken data security. Data security is important, as
handling large amounts of sensitive information requires robust protection
measures. These concerns are well-founded, too – last year, Samsung banned
employees from using third-party GenAI tools after staff leaked sensitive
internal data through ChatGPT. International regulations are also catching up
with AI and establishing
requirements around data privacy and security. It’s important to build clear
policies around data use, set up and regularly review access permissions, and
establish logging and monitoring to track unauthorised use or data access.
Consult international best practices for AI-related data privacy and put their
recommendations into practice, because they are likely to strongly inform
evolving compliance regulations. ... The best AI tools in the world won’t be
much use if your finance teams avoid actually using them. Many employees are
nervous that AI could take over their jobs and/or distrust the tech, which
leads them to ignore AI-powered insights. Using AI tools effectively also
requires digital literacy and technical skills that may be lacking among your
employees.
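On the logging-and-monitoring point above, one possible pattern (purely illustrative; the dataset, user names, and helper are made up) is to wrap data-access functions so every read of sensitive finance data leaves an audit trail.

```python
# Decorator-based audit logging: each call to a wrapped data-access
# function records who accessed which dataset, and through which function.
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("fpa.audit")

def audited(dataset: str):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user: str, *args, **kwargs):
            audit.info("user=%s accessed dataset=%s via %s", user, dataset, fn.__name__)
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@audited("quarterly_forecast")
def load_forecast(user: str) -> dict:
    return {"q3_revenue_forecast": 1_250_000}  # stand-in for a real query

load_forecast("analyst@example.com")  # emits an audit log line
```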
Mind the Gap: Migration Projects – Gaining Traction or Spinning Your Wheels
Think about your migration project like running a marathon in rented shoes. (I
know, I know. It’s not a photo-realistic example, but stick with me. You’ll
get the point.) You start out with some good shoes, but they’re very
expensive. Comfortable and well-fitting, but expensive. At, say, the 10-mile
marker you have the opportunity to swap out your shoes. The ones you have are
expensive and you don’t want to keep spending the money. Besides, you’re doing
fine. So, you stop, select a less expensive pair, and put them on. All the
while, the clock is ticking and you’re not making any progress toward the
finish line. You’re betting on the expectation that you’ll make up the lost
time by running the remainder of the race faster. The shoes are cheaper, but
they don’t fit as well, and after a few miles your feet begin to hurt. Your
pace slows considerably. You finish the race. Eventually. Far short of your
goal, blood soaking through your socks, and far slower than had you not
migrated. As you hobble back home with your disappointing result, you can
console yourself with the money you saved as you try to convince yourself that
it was worth it.
Quote for the day:
“Identify your problems but give your
power and energy to solutions.” -- Tony Robbins