Quote for the day:
"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown
Common Pitfalls and New Challenges in IT Automation

“You don’t know what you don’t know and can’t improve what you can’t see.
Without process visibility, automation efforts may lead to automating flawed
processes. In effect, accelerating problems while wasting both time and
resources and leading to diminished goodwill by skeptics,” says Kerry Brown,
transformation evangelist at Celonis, a process mining and process intelligence
provider. The aim of automating processes is to improve how the business
performs. That means drawing a direct line from the automation effort to a
well-defined ROI. ... Data is arguably the most boring issue on IT’s plate.
That’s because it requires a ton of effort to update, label, manage and store
massive amounts of data and the job is never quite done. It may be boring work,
but it is essential and can be fatal if left for later. “One of the most
significant mistakes CIOs make when approaching automation is underestimating
the importance of data quality. Automation tools are designed to process and
analyze data at scale, but they rely entirely on the quality of the input data,”
says Shuai Guan, co-founder and CEO at Thunderbit, an AI web scraper tool. ...
"CIOs often fall into the trap of thinking automation is just about suppressing
noise and reducing ticket volumes. While that’s one fairly common use case,
automation can offer much more value when done strategically,” says Erik Gaston
Outmaneuvering Tariffs: Navigating Disruption with Data-Driven Resilience

The fact that tariffs were coming was expected – President Donald Trump
campaigned promising tariffs – but few could have anticipated their severity (145%
on Chinese imports, as of this writing) and their pace of change (prohibitively
high “reciprocal” tariffs on 100+ countries, only to be temporarily rescinded
days later). Also unpredictable were second-order effects such as stock and bond
market reactions, affecting the cost of capital, and the impact on consumer
demand, due to the changing expectations of inflation or concerns of job loss.
... Most organizations will have fragmented views of data, including views of
all of the components that come from a given supplier or are delivered through a
specific transportation provider. They may have a product-centric view that
includes all suppliers that contribute all of the components of a given product.
But this data often resides in a variety of supplier-management apps,
procurement apps, demand forecasting apps, and other types of apps. Some may be
consolidated into a data lakehouse or a cloud data warehouse to enable advanced
analytics, but the time required by a data engineering team to build the
necessary data pipelines from these systems is often multiple days or weeks, and
such pipelines will usually only be implemented for scenarios that the business
expects will be stable over time.
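The consolidated, product-centric view described above is straightforward to prototype once the underlying extracts exist, even before a full lakehouse pipeline is built. The snippet below is a minimal sketch of that idea in Python with pandas; the file names, column names, and tariff table are hypothetical stand-ins for exports from supplier-management, procurement, and costing systems, not a reference implementation.

```python
# Hypothetical sketch: join fragmented supplier, product, and cost extracts
# into a single tariff-exposure view. All file and column names are assumptions.
import pandas as pd

components = pd.read_csv("product_bom.csv")      # product_id, component_id, supplier_id
suppliers = pd.read_csv("suppliers.csv")         # supplier_id, country_of_origin
costs = pd.read_csv("component_costs.csv")       # component_id, unit_cost
tariffs = pd.read_csv("tariff_rates.csv")        # country_of_origin, tariff_rate

exposure = (
    components
    .merge(suppliers, on="supplier_id")
    .merge(costs, on="component_id")
    .merge(tariffs, on="country_of_origin", how="left")
    .fillna({"tariff_rate": 0.0})
)
exposure["tariff_cost"] = exposure["unit_cost"] * exposure["tariff_rate"]

# Product-centric view: total tariff exposure per product, highest first
print(exposure.groupby("product_id")["tariff_cost"].sum().sort_values(ascending=False))
```

Such an ad hoc view is no substitute for durable pipelines, but it illustrates why fragmented supplier, product, and cost data must be joined somewhere before tariff exposure can be quantified.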
The state of intrusions: Stolen credentials and perimeter exploits on the rise, as phishing wanes

What’s worrying is that in over half of intrusions (57%), the victim
organizations learned about the compromise of their networks and systems from a
third party rather than discovering it through internal means. In 14% of
cases, organizations were notified directly by attackers, usually in the form of
ransom notes, but 43% of cases involved external entities such as a
cybersecurity company or law enforcement agencies. The average time attackers
spent inside a network until being discovered last year was 11 days, a one-day
increase over 2023, though still a major improvement versus a decade ago when
the average discovery time was 205 days. Attacker dwell time, as Mandiant calls
it, has steadily decreased over the years, which is a good sign ... In terms of
ransomware, the most common infection vectors observed by Mandiant last year were
brute-force attacks (26%), such as password spraying and use of common default
credentials, followed by stolen credentials and exploits (21% each), prior
compromises resulting in sold access (15%), and third-party compromises (10%).
Cloud accounts and assets were compromised through phishing (39%), stolen
credentials (35%), SIM swapping (6%), and voice phishing (6%). Over two-thirds
of cloud compromises resulted in data theft, and 38% were financially motivated,
with data extortion, business email compromise, ransomware, and cryptocurrency
fraud among the leading goals.
Three Ways AI Can Weaken Your Cybersecurity

“Slopsquatting” is a fresh AI take on “typosquatting,” where ne’er-do-wells
spread malware to unsuspecting Web travelers who happen to mistype a URL. With
slopsquatting, the bad guys are spreading malware through software development
libraries that have been hallucinated by GenAI. ... While it is still unclear
whether the bad guys have weaponized slopsquatting yet, GenAI’s tendency to
hallucinate software libraries is perfectly clear. Last month, researchers
published a paper that concluded that GenAI recommends Python and JavaScript
libraries that don’t exist about one-fifth of the time. ... Like the SQL
injection attacks that plagued early Web 2.0 warriors who didn’t adequately
validate database input fields, prompt injections involve the surreptitious
injection of a malicious prompt into a GenAI-enabled application to achieve some
goal, ranging from information disclosure to code execution. Mitigating
these sorts of attacks is difficult because of the nature of GenAI applications.
Instead of inspecting code for malicious entities, organizations must
investigate the entirety of a model, including all of its weights. ... A form of
adversarial AI attacks, data poisoning or data manipulation poses a serious risk
to organizations that rely on AI. According to the security firm CrowdStrike,
data poisoning is a risk to healthcare, finance, automotive, and HR use cases,
and can even potentially be used to create backdoors.
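The slopsquatting risk is straightforward to reduce at build time: before installing a dependency suggested by a coding assistant, verify that the package actually exists on the index. The sketch below is a hedged illustration in Python against PyPI's public JSON API; the suggested package names are placeholders, and a real pipeline would also pin versions, check download history, and verify hashes.

```python
# Illustrative sketch: check whether AI-suggested packages exist on PyPI
# before installing them. The names in `suggested` are placeholders.
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows the package and it has at least one release."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
            return bool(data.get("releases"))  # an empty project is still suspicious
    except urllib.error.HTTPError:
        return False  # 404: the name was likely hallucinated

suggested = ["requests", "definitely-not-a-real-lib-123"]  # hypothetical AI output
for pkg in suggested:
    status = "found" if package_exists_on_pypi(pkg) else "MISSING - do not install"
    print(f"{pkg}: {status}")
```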
AI Has Moved From Experimentation to Execution in Enterprise IT

According to the SOAS report, 94% of organisations are deploying applications
across multiple environments—including public clouds, private clouds,
on-premises data centers, edge computing, and colocation facilities—to meet
varied scalability, cost, and compliance requirements. Consequently, most
decision-makers see hybrid environments as critical to their operational
flexibility. 91% cited adaptability to fluctuating business needs as the top
benefit of adopting multiple clouds, followed by improved app resiliency (68%)
and cost efficiencies (59%). A hybrid approach is also reflected in deployment
strategies for AI workloads, with 51% planning to use models across both cloud
and on-premises environments for the foreseeable future. Significantly, 79% of
organisations recently repatriated at least one application from the public
cloud back to an on-premises or co-location environment, citing cost control,
security concerns, and predictability. ... “While spreading applications across
different environments and cloud providers can bring challenges, the benefits of
being cloud-agnostic are too great to ignore. It has never been clearer that the
hybrid approach to app deployment is here to stay,” said Cindy Borovick,
Director of Market and Competitive Intelligence.
Trying to Scale With a Small Team? Here's How to Drive Growth Without Draining Your Resources

To be an effective entrepreneur or leader, communication is key, and being able
to prioritize initiatives that directly align with the overall strategic vision
ensures that your lean team is working on projects that have the greatest
impact. Integrate key frameworks such as Responsible, Accountable, Consulted,
and Informed (RACI) and Objectives and Key Results (OKRs) to maintain
transparency and focus and to measure progress. By concentrating effort on
high-impact activities, your lean team can achieve significant results without
the unnecessary strain that typically burdens early-stage
organizations. ... Many think that agile methodologies are only for the
fast-moving software development industry — but in reality, the frameworks are
powerful tools for lean teams in any industry. Encouraging the right culture is
key: one where quick pivots, regular and genuine feedback loops, and leadership
that promotes continuous improvement are part of everyday workflows. This agile
mindset, when adopted early, helps teams rapidly respond to market changes and
client issues. ... Trusting others builds rapport. Assigning clear ownership of
tasks and giving team members the autonomy to execute strategies creatively and
efficiently, including the room to fail, is how trust is created.
Effecting Culture Changes in Product Teams

Depending on the organization, the responsibility of successfully leading a
culture shift among the product team could fall to various individuals – the
CPO, VP of product development, product manager, etc. But regardless of the
specific title, to be an effective leader, you can’t assume you know all the
answers. Start by having one-to-one conversations with numerous members on the
product/engineering team. Ask for their input and understand, from their
perspective, what is working, what’s not working, and what ideas they have for
how to accelerate product release timelines. After conducting one-to-one
discussions, sit down and correlate the information. Where are the common
denominators? Did multiple team members make the same suggestions? Identify the
roadblocks that are slowing down the product team or standing in the way of
delivering incremental value on a more regular basis. In many cases, tech
leaders will find that their team already knows how to fix the issue – they just
need permission to do things a bit differently and adjust company
policies/procedures to better support a more accelerated timeline. Talking
one-on-one with team members also helps resolve any misunderstandings around why
the pace of work must change as the company scales and accumulates more
customers. Product engineers often have a clear vision of what the end product
should entail, and they want to be able to deliver on that vision.
Microsoft Confirms Password Spraying Attack — What You Need To Know

The password spraying attack exploited a command line interface tool called
AzureChecker to “download AES-encrypted data that when decrypted reveals the
list of password spray targets,” the report said. To add insult to injury, the
tool then accepted an accounts.txt file containing the username and password
combinations used for the attack as input. “The threat actor then used the
information from both files and posted the credentials to the target tenants for
validation,” Microsoft explained. The successful attack enabled the Storm-1977
hackers to then leverage a guest account in order to create a compromised
subscription resource group and, ultimately, more than 200 containers that were
used for cryptomining. ... Passwords are no longer enough to keep us safe
online. That’s the view of Chris Burton, head of professional services at
Pentest People, who told me that “where possible, we should be using passkeys,
they’re far more secure, even if adoption is still patchy.” Lorri
Janssen-Anessi, director of external cyber assessments at BlueVoyant, is no less
adamant when it comes to going passwordless. ... And Brian Pontarelli, CEO
of FusionAuth, said that the teams who are building the future of passwords are
the same ones that are building and managing the login pages of their apps.
“Some of them are getting rid of passwords entirely,” Pontarelli said.
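Password spraying also has a recognizable signature in sign-in telemetry: many distinct accounts failing from the same source in a short window, with only a handful of attempts per account. The sketch below is a simplified, hypothetical illustration of that detection idea over a generic CSV sign-in export; the field names and threshold are assumptions, not Microsoft's detection logic.

```python
# Simplified sketch: flag possible password spraying in a sign-in log export.
# Field names (timestamp, source_ip, username, success) and the threshold
# are illustrative assumptions about a generic log, not a vendor schema.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)
DISTINCT_USER_THRESHOLD = 20  # many users, few attempts each: a spray pattern

def detect_spray(log_path: str):
    failures = defaultdict(list)  # source_ip -> [(time, username), ...]
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["success"].lower() == "false":
                ts = datetime.fromisoformat(row["timestamp"])
                failures[row["source_ip"]].append((ts, row["username"]))

    suspects = []
    for ip, events in failures.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            users_in_window = {u for t, u in events[i:] if t - start <= WINDOW}
            if len(users_in_window) >= DISTINCT_USER_THRESHOLD:
                suspects.append((ip, start, len(users_in_window)))
                break
    return suspects
```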
The secret weapon for transformation? Treating it like a merger

Like an IMO, a transformation office serves as the conductor — setting the
tempo, aligning initiatives and resolving portfolio-level tensions before they
turn into performance issues. It defines the “music” everyone should be playing:
a unified vision for experience, business architecture, technology design and
most importantly, change management. It also builds connective tissue. It
doesn’t just write the blueprint — it stays close to initiative or project leads
to ensure adherence, adapts when necessary and surfaces interdependencies that
might otherwise go unnoticed. ... What makes the transformation office truly
effective isn’t just the caliber of its domain leaders — it’s the steering
committee of cross-functional VPs from core business units and corporate
functions that provides strategic direction and enterprise-wide accountability.
This group sets the course, breaks ties and ensures that transformation efforts
reflect shared priorities rather than siloed agendas. Together, they co-develop
and maintain a multi-year roadmap that articulates what capabilities the
enterprise needs, when and in what sequence. Crucially, they’re empowered to
make decisions that span the legacy seams of the organization — the gray areas
where most transformations falter. In this way, the transformation office
becomes more than connective tissue; it becomes an engine for enterprise
decision-making.
Legacy Modernization: Architecting Real-Time Systems Around a Mainframe
When traffic spikes hit our web portal, those requests would flow through to the
mainframe. Unlike cloud systems, mainframes can't elastically scale to handle
sudden load increases. This created a bottleneck that could overload the
mainframe, causing connection timeouts. As timeouts increased, the mainframe
would crash, leading to complete service outages with a large blast radius:
hundreds of other applications that depend on the mainframe would also be
impacted. This is a perfect example of the problems with synchronous connections
to the mainframe. When a highly elastic resource like the web can overwhelm the
mainframe, the result can be failures in the datastores, and sometimes those
failures cascade to every consuming application. ... Change Data
Capture became the foundation of our new architecture. Instead of batch ETLs
running a few times daily, CDC streamed data changes from the mainframes in near
real-time. This created what we called a "system-of-reference" - not the
authoritative source of truth (the mainframe remains "system-of-record"), but a
continuously updated reflection of it. The system of reference is not a proxy of
the system of record, which is why our website was still live when the mainframe
went down.
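A system-of-reference like this is typically kept current by a consumer that reads the CDC stream and applies each change event to a separately scalable read store, so web traffic never touches the mainframe directly. The sketch below is a minimal, hypothetical illustration of that pattern using the kafka-python client and an in-memory dict as a stand-in read store; the topic name, event shape (loosely Debezium-like), and the store itself are assumptions rather than the architecture the team actually built.

```python
# Minimal sketch of a CDC consumer maintaining a "system-of-reference" read model.
# Topic name, event shape, and the in-memory store are illustrative assumptions;
# a real deployment would use an elastic datastore and handle ordering and retries.
import json
from kafka import KafkaConsumer  # pip install kafka-python

read_model = {}  # record key -> latest image (stand-in for an elastic read store)

consumer = KafkaConsumer(
    "mainframe.accounts.cdc",                    # hypothetical CDC topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value                        # e.g. {"op": "u", "key": "42", "after": {...}}
    key = event["key"]
    if event["op"] in ("c", "u"):                # create/update: apply the new row image
        read_model[key] = event["after"]
    elif event["op"] == "d":                     # delete: drop it from the reference copy
        read_model.pop(key, None)
    # Reads are served from read_model (or its real equivalent), never the mainframe.
```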