Implementing an Effective Data Strategy
What challenges must a data strategy overcome: Creating a data culture? Building
the data business case? Or fixing data issues? Martin Davis, CIO of Southern
Company, said, “It is all of those, but it starts with data ownership. Once you
have the right business ownership, you can work on the culture, the business
case, and other things.” Jim Russell, CIO for Manhattanville College, claimed
that with ownership established, “What most organizations are lacking are
foundational skills in the workforce. As competing knowledge requirements have
intensified, fewer employees seem to have data literacy or data fluency. For
this reason, I’ve been pushing data literacy as a foundational requirement, with
expertise resulting in data fluency, which means different things in different
campus communities.” ... Obviously, a smart data strategy comes from business and
digital strategy. For this reason, Russell said, “It is important to start with
a common vision that spans data products and services. With this, CIOs should
help teams define vision and create clear scaffolding for that overarching vision...”
The evolution of deception tactics from traditional to cyber warfare
Determining the next steps in responding to a cyber incident or attack raises
many concerns that require careful navigation of ethics, further underscoring
the importance of international governance and regulations. An
escalatory response to a cyberattack, such as a “hack back” or “attack back,”
raises legal and ethical questions if such action could lead to a larger
conflict. Because cyber attackers are becoming more skilled at hiding their true
identities, there is indeed cause for concern about whether a response could
lead to retaliatory actions and collateral damage against innocent parties.
Additionally, the intentions of the original attacker could be misidentified by
the victim, leading to disproportionate or unneeded attacks. ... This
necessitates a cyber defense strategy that doesn’t just block or react, but one
that is also designed to seek out attackers’ motives and identities. It’s a tale
as old as time in the military world—if you understand your opponent’s motives,
you have the upper hand.
Developers and the AI Job Wars: Here's How Developers Win
“Software development is less about writing software and more about
understanding the problem you are trying to solve,” says Louis Lang, CTO and
co-founder of Phylum. “While the likes of ChatGPT and Copilot might make the
writing process quicker, it has a long way to go before it can reason through a
novel problem domain. Making development faster with AI only applies to
scaffolding new projects and writing well-trodden code, and even this seems
problematic from time to time. If you try to produce something that requires
deep expertise, AI will not help you.” But the jobs AI does destroy, it
replaces with new roles. For example, AI is itself software and, as such, requires
developers. “With the rise of generative AI, software developers play a pivotal
role in designing, building, and maintaining the underlying infrastructure that
powers AI applications,” says Adam Prout, CTO of SingleStore, a cloud-native
database. “Their position is vital to implementing algorithms, creating data
pipelines, and optimizing models in close collaboration with data scientists and
machine learning engineers. The expertise of a software developer is integral to
bringing AI projects from conceptualization to real-world deployment.”
It’s time for cloud tech to meet operational tech at industrial sites
Industrial sites’ challenges can be daunting, but cloud computing—particularly
its security and edge computing capabilities—has come a long way. Some
industrial sites are already adopting standards in site data collection, such as
OPC Unified Architecture (OPC UA), a machine-to-machine communication protocol
that allows control systems to exchange data securely and consistently. ... Edge
computing can store a subset of data at a site, and in some cases can even
provide cloud compute capabilities, thus allowing sites to continue to use cloud
capabilities even if network connectivity is lost. Of course, the corresponding
edge computing architectures—the amount of computing needed to store and process
data before sending it to the cloud—will vary based on the size of the
connectivity gap, the amount of data to be transferred, and the use of digital
assets, such as sensors and recording devices. Edge computing also manages
data’s return trip from the cloud to sites, making cloud-dependent, on-site
applications faster and more reliable, since it reduces reliance on network
connectivity.
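The store-and-forward pattern described above can be sketched in a few lines. The following TypeScript is a minimal illustration rather than a reference design: the cloud endpoint, batch size, and reading shape are all hypothetical. It shows the core idea of recording data locally at the edge first, then draining the buffer to the cloud whenever connectivity allows.

```typescript
// Minimal store-and-forward sketch: buffer readings at the edge and flush
// them to a (hypothetical) cloud ingest endpoint when connectivity allows.
interface Reading {
  sensorId: string;
  timestamp: number; // Unix epoch milliseconds
  value: number;
}

class EdgeBuffer {
  private queue: Reading[] = [];

  constructor(private cloudEndpoint: string) {}

  // Always record locally first, so nothing is lost during an outage.
  record(reading: Reading): void {
    this.queue.push(reading);
  }

  // Try to drain the buffer in bounded batches; on any network failure,
  // keep the data queued and retry on the next flush cycle.
  async flush(): Promise<void> {
    while (this.queue.length > 0) {
      const batch = this.queue.slice(0, 100);
      try {
        const res = await fetch(this.cloudEndpoint, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(batch),
        });
        if (!res.ok) return;                // leave the batch queued, retry later
        this.queue.splice(0, batch.length); // drop only what the cloud accepted
      } catch {
        return;                             // offline: data stays at the edge
      }
    }
  }
}

// Example wiring: record continuously, attempt a flush every 30 seconds.
const buffer = new EdgeBuffer("https://cloud.example.com/ingest");
setInterval(() => buffer.flush(), 30_000);
```

In a real deployment the queue would be persisted to local disk or an edge database rather than held in memory, and its capacity sized to the expected connectivity gap and data volume, as noted above.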
Opportunities and Limitations of Deploying Large Language Models in the Enterprise
The progress we’ve seen in the last few months is nothing short of impressive.
While natural language understanding and processing is not net-new, it’s now
much more accessible. Not to mention that models have gone from 0 to 60 in terms
of depth and capabilities. But, for many CIOs, the value may not be immediately
obvious. Many organizations have been slashing budgets in the last year, and
making blind investments is not on their agenda. ... Large Language Models
(LLMs) like GPT-4 are based on neural networks and are probabilistic in
nature: they generate text by sampling from a distribution over possible next
tokens. This means that, given the same input, they might produce slightly
different outputs each time because of the randomness introduced during that
sampling step. This is what we mean when
we say LLMs are “non-deterministic.” ... Despite these challenges, there are
ways to manage the non-deterministic nature of LLMs, such as using ensemble
methods, applying post-processing rules, or setting a seed for the randomness to
get repeatable results.
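To make the seed option concrete, here is a minimal sketch using the OpenAI Node SDK; the model name and prompt are illustrative. Note that even with a fixed seed and zero temperature the vendor treats reproducibility as best-effort, so downstream validation is still advisable.

```typescript
// Sketch: reduce output variability by pinning temperature and supplying a seed.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function repeatableCompletion(prompt: string): Promise<string | null> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",  // illustrative model name
    messages: [{ role: "user", content: prompt }],
    temperature: 0,   // minimize sampling randomness
    seed: 42,         // ask the service for repeatable sampling
  });
  return completion.choices[0].message.content;
}

repeatableCompletion("Summarize our refund policy in one sentence.")
  .then((text) => console.log(text));
```

Post-processing rules, for example validating the response against a schema before it reaches users, layer naturally on top of a call like this.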
How to get internal employee poaching right
Even if your company has an open culture, it’s critical to develop cooperative
relationships with managers in other departments because losing a top
performer isn’t easy for anyone. Nevertheless, if a user department manager
recognizes an employee’s interest in transferring to IT, and you have a strong
working relationship with that manager, internal hiring can go a lot more
smoothly. ... At some companies, poaching an employee from another department
is considered unethical and underhanded. Regardless, internal employee
poaching can certainly be an issue if you actively recruit another department’s
employee without letting that department’s manager know. It is vital to
know up front the actions and behaviors that are acceptable within your
company before you start recruiting another department’s employee. For
instance, in some cases, it is acceptable for an employee to be “loaned out”
from one department to another for the duration of a specific one-off project.
Such a policy helps provide temporary resources for projects while enabling
employees on loan to gain knowledge and cross-train in another
discipline.
The Never-Ending Battle: Routine Patching vs. Operational Stability
There’s an ongoing battle between competing priorities being waged every day
in enterprises globally, and it’s been going on for decades. Cybersecurity
teams are concerned with unpatched vulnerabilities and the breaches they risk,
while IT professionals are driven by operational availability, the lack of
which jeopardizes the business’ ability to operate. In today’s world,
operational stability is winning, to the delight of threat actors everywhere.
... Existing vulnerability management strategies and tools focus only on the
prioritization of risk. While that helps organizations identify which
vulnerabilities they should attend to first, the actual orchestration of
remediation is often ignored entirely. Remediation efforts must be handled
discretely by different tools, processes, and teams, often with little to no
continuity between them. Further, tangibly demonstrating the efficacy and
progress of a vulnerability and patch management program is a massive
undertaking. With each new vulnerability and patch, individual teams tackle
discovery, correlation, and remediation in a one-off fashion, compounding
existing inter-team frustration and increasingly blurring the definition of
success.
DeepMind Co-founder: The Next Stage of Gen AI Is a Personal AI
Suleyman said Pi will do away with the popular internet model of offering a
product, like Google Search or Facebook, free to users while relying on
advertising to pay the bills. The ad-based approach does not align the
interest of the tech platform with the user. “Really, the customer for
Facebook and Google and the other big companies is the advertiser,” he said.
“It’s not the user.” Pi will be different since it does not disseminate its
APIs, which would be needed for commercial use. Instead, “you as the consumer are
the only person that pays for the AI,” Suleyman said. As for the known risks
of generative AI, including toxic content, hallucinations, and bias, Suleyman
said Pi was built to avoid toxic subjects. He claims that “none of the prompt
hacks work against us.” Prompt hacks, which include asking the AI to pretend
to be another persona, are geared toward getting around safeguards that prevent
dangerous or toxic responses from being disclosed. For hallucinations, Suleyman
said Pi has access to real-time information but admitted this remains a challenge.
What are the cyber risks from the latest Middle Eastern conflict?
One substantial difference observed between the two conflicts is a lack of
cyber activity before the initial Hamas attack. Prior to Russia’s invasion of
Ukraine, which had been signalled months in advance by the Russian government,
Ukraine was bombarded with a widespread campaign of cyber intrusions designed
to soften up critical targets in advance. This was not the case in the Gaza
war, and this is not much of a surprise, because out of necessity, Hamas spent
months – maybe years – plotting its initial attack with exceptional attention
paid to operational security (OpSec). Indeed, it has been suggested that some
senior members of Hamas were kept in the dark entirely, in case they were
compromised by Israeli intelligence. Therefore, for the incursion to take
Israel by complete surprise, it may have been necessary for pro-Palestinian
groups and Hamas-affiliated actors to confine their activity to normal levels.
According to SecurityScorecard’s intel team, this was almost certainly the
case.
Microsoft Playwright Testing: Scalable End-to-End Testing for Modern Web Apps
With the @playwright/test runner, tests run in independent, parallel worker
processes, with each process starting its own browser. Moreover, increasing
the number of parallel workers can reduce the time it takes to complete the
full test suite. However, when running tests locally or in a continuous
integration (CI) pipeline, parallelism is limited by the number of central
processing unit (CPU) cores available on the local machine or CI agent. ... With
Microsoft Playwright Testing, developers can use the scalable parallelism
provided by the service to run web app tests simultaneously across all modern
rendering engines such as Chromium, WebKit, and Firefox on Windows and Linux
and mobile emulation of Google Chrome for Android and Mobile Safari. In
addition, the service-managed browsers ensure consistent and reliable results
for functional and visual regression testing, whether tests run from a CI
pipeline or development machine.
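As a rough illustration of the local side of this, a Playwright configuration along the following lines runs one suite across the major engines with a bounded worker pool; the worker count and project list are illustrative, and the connection settings for the hosted service itself are not shown.

```typescript
// playwright.config.ts: minimal sketch of parallel workers across rendering
// engines and mobile emulation profiles.
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  fullyParallel: true,
  workers: 4, // locally bounded by CPU cores; a hosted service can scale this out
  projects: [
    { name: "chromium",      use: { ...devices["Desktop Chrome"] } },
    { name: "firefox",       use: { ...devices["Desktop Firefox"] } },
    { name: "webkit",        use: { ...devices["Desktop Safari"] } },
    { name: "mobile-chrome", use: { ...devices["Pixel 5"] } },
    { name: "mobile-safari", use: { ...devices["iPhone 13"] } },
  ],
});
```

Running `npx playwright test` against a config like this executes every project, with tests spread across the worker pool up to the configured limit.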
Quote for the day:
"One advantage of talking to yourself
is that you know at least somebody's listening." --
Franklin P. Jones