Is it Possible to Calculate Technology Debt?
Perhaps we should rename it Architectural Debt or even Organisational Debt? From
an Enterprise Architecture standpoint, we talk about “People, Processes, and
Technology,” all of which contribute to the debt over time and form a more
holistic view of the real debt. It does not matter what it is called as long as
there is consistency within the organisation and it has been defined, agreed and
communicated. ... The absence of master data management, quality, data lineage,
and data validation all contribute to data debt. People debt is caused by having
to support out-of-date assets (software and/or infrastructure), the resulting
deskilling over time, and the missed opportunity to reskill, all of which can
lead to employee attrition. Processes that require modification can become
locked into existing technology because of the high cost of change; the
alternative is to adjust the system design to accommodate poorly designed
processes. While Robotic
Process Automation (RPA) can provide a rapid solution in such cases, it raises
the question of whether the automation simply perpetuates flawed processes
without addressing the underlying issue.
There Are Four Types of Data Observability. Which One is Right for You?
Business KPI Drifts: Since data observability tools monitor the data itself,
they are often used to track business KPIs just as much as they track data
quality drifts. For example, they can monitor the range of transaction amounts
and raise alerts when spikes or unusual values are detected. This kind of
autopilot surfaces outliers in bad data and helps build trust in good data.
Data Quality Rule Building: Data observability tools have automated pattern
detection, advanced profiling, and time series capabilities and, therefore, can
be used to discover and investigate quality issues in historical data to help
build and shape the rules that should govern the data going forward.
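As a rough illustration of the kind of checks these tools automate, the sketch below (plain Python, not any particular vendor's product; the window size and thresholds are arbitrary assumptions) flags transaction amounts that spike away from a rolling baseline, and profiles historical data to propose a range rule for governing new data:

```python
import statistics

def flag_outliers(amounts, window=30, z_threshold=3.0):
    """Flag indices of values that spike far outside the recent rolling baseline."""
    flagged = []
    for i in range(window, len(amounts)):
        baseline = amounts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        # A value more than z_threshold standard deviations from the
        # rolling mean counts as an unusual spike.
        if stdev > 0 and abs(amounts[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

def propose_range_rule(history, k=3.0):
    """Profile historical values and suggest a min/max rule (mean +/- k stdev)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return {"min": mean - k * stdev, "max": mean + k * stdev}

amounts = [100.0 + (i % 7) for i in range(40)]
amounts[35] = 5000.0  # an unusual spike
print(flag_outliers(amounts))            # → [35]
print(propose_range_rule(amounts[:35]))  # rule learned from the clean history
```

A real observability tool would add seasonality handling and alert routing on top, but the core of both capabilities is this pattern: learn a baseline from the data itself, then flag departures from it.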
Observability for a Hybrid Data Ecosystem: Today, data stacks consist of data
lakes, warehouses, streaming sources, structured, semi-structured, and
unstructured data, API calls, and much more. ... Unlike metadata monitoring
that is limited to sources with sufficient metadata and system logs – a property
that streaming data or APIs don’t offer – data observability cuts through to the
data itself and does not rely on these utilities.
Why Companies Should Consider Developing A Chief Security Officer Position
The combination of the top-down and cross-functional influence of the CSO with
the technical reach of the CISO should be key to creating and maintaining the
momentum required to deliver change and break business resistance where it
happens. In my experience, firms looking to implement this type of CSO
position should start looking internally for the right executive: Ultimately
the role is all about trust, and your candidate should have intimate knowledge
of how to navigate the internal workings of the organization. I would
recommend looking for someone who is an ambitious leader, not someone in an
end-of-career position. Additionally, consider assigning this role to a
seasoned executive: someone you believe is motivated above all by the
protection of the business from active threats, and able to take an elevated
long-term view where required, over and above the short-term fluctuations of
any business. Demonstrating leadership in a field as complex as security
should be seen as an opportunity to showcase skills that can be applied
elsewhere in the organization.
Threatening botnets can be created with little code experience, Akamai finds
According to the research, the Dark Frost actor is selling the tool as a
DDoS-for-hire exploit and as a spamming tool. “This is not the first exploit
by this actor,” said West, who noted that the attacker favors Discord to
openly tout their wares and brag. “He was taking orders there, and even
posting screenshots of their bank account, which may or may not be
legitimate.” ... The Dark Frost botnet uses code from the infamous Mirai
botnet, which West said was easy to obtain and highly effective in exploiting
hundreds of machines. It is therefore emblematic of how, with source code
from previously successful malware strains and AI code generation, someone
with minimal knowledge can launch botnets and malware. “The author of Mirai
put out the source code for everyone to see, and I think that it started and
encouraged the trend of other malware authors doing the same, or of security
researchers publishing source code to get a bit of credibility,” said West.
Experts say stopping AI is not possible — or desirable
“These systems are not imputed with the capability to do all the things that
they're now able to do. We didn’t program GPT-4 to write computer programs but
it can do that, particularly when it’s combined with other capabilities like
code interpreter and other programs and plugins. That’s exciting and a little
daunting. We’re trying to get our hands wrapped around the risk profiles of
these systems, which are evolving literally on a daily basis.
“That doesn't mean it's all net risk. There are net benefits as well,
including in the safety space. I think [AI safety research company] Anthropic
is a really interesting example of that, where they are doing some really
interesting safety testing work where they are asking a model to be less
biased and at a certain size they found it will literally produce output that
is less biased simply by asking it. So, I think we need to look at how we can
leverage some of those emerging capabilities to manage the risk of these
systems themselves as well as the risk of what’s net new from these emerging
capabilities.”
How IT can balance local needs and global efficiency in a multipolar world
Technical architecture solutions, such as microservices, can help companies
balance the level of local solution tailoring with the need to harness scale
efficiencies. While not new, these solutions are more widely accepted and can
be more easily realized in modern cloud platforms. These developments are
enabling leading companies to evolve their operating models by building
standardized, modular, and configurable solutions that maximize business
flexibility and efficiency while making data management more transparent ...
However useful these localization capabilities are, they will not work as
needed unless local teams have sufficient autonomy (at some companies, local
teams in China, for example, clear decisions through central headquarters,
which is a major roadblock for pace and innovation). The best companies
provide local teams with specific decision rights within guidelines and
support them by providing necessary capabilities, such as IT talent embedded
with local market teams to get customer feedback early.
Constructing the innovation mandate
We need to understand that successful innovation touches all aspects of a
business: it contributes to improving business processes, identifies new,
often imaginative, ways to reduce costs, extends existing business models in
new directions, and discovers new ways to position the business in its
markets. To get to a consistent performance of innovation and creativity
within organizations you do need to rely on a process, structure and the
consistent ability to foster a culture of innovation. An innovation mandate is
a critical tool for defining the scope and direction of innovation and the
underlying values, commitment and resources placed behind it. Normally this
innovation mandate comes in the form of a document, generally built by a
small team of senior leaders, innovation experts and subject matter experts.
That group should possess a deep understanding of the existing organization’s
strategy, business models, operations and culture and a wider appreciation of
the innovation landscape, the “fields of opportunity” and the emerging
practices of innovation management.
3 Unexpected Technology Duos That Will Supercharge Your Marketing
While geofencing isn't the newest technology to enter the marketing
spectrum, it is improving rapidly. Geofencing creates
virtual geographic boundaries around targeted areas, and when someone
crosses into one of those areas, it creates a triggered response — your ads
will show up while they're browsing their favorite sites or checking their
email. ... Website content can be a major trust builder for your business
and therefore can play a vital part in turning an interested prospect into a
buying customer. But many a business owner has cringed at the thought of
writing copy for their website ... let alone regularly updating it with blog
posts or e-newsletter articles. Creating large amounts of content can be a
constant challenge for business owners, and I get it. You're already busy
running a business! But what I want small business owners to realize is that
they have access to many tools — some of them free — that will do 95% of the
writing for you.
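The geofencing trigger described above reduces to a point-in-boundary test: is the user's current position inside the virtual fence? A minimal sketch in plain Python, assuming a circular fence and made-up coordinates (no specific geofencing API or ad platform is implied):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def inside_geofence(user, fence_center, radius_km):
    """Return True when the user's position falls inside the circular fence."""
    return haversine_km(user[0], user[1], fence_center[0], fence_center[1]) <= radius_km

store = (40.7580, -73.9855)  # hypothetical store location
print(inside_geofence((40.7590, -73.9840), store, 1.0))  # nearby point → True
print(inside_geofence((40.6892, -74.0445), store, 1.0))  # distant point → False
```

In a real deployment the crossing event, not the raw position, fires the trigger: the platform remembers whether each user was previously inside the fence and acts only on an outside-to-inside transition.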
The Evolution of the Chief Privacy Officer
Given the natural overlap between privacy, security and the uses of data,
strategic cooperation is key. “It’s about building a strategy together to
develop an enterprise approach,” Jones said. “My role is to build privacy
and transparency into every state system and application and business
process at every stage of the life cycle.” Cotterill looks to Indiana’s IT
org chart to help define the spheres of responsibility. The governor
appoints the chief information officer and chief data officer, and the CISO
and CPO report to each of them, respectively. “The CIO, and the CISO
reporting to him, they’re focused on providing cost-effective, secure,
consistent, reliable enterprise IT services and products,” he said. “For the
CDO, with the CPO reporting to him … we have a threefold mission: to empower
innovation, enable the use of open data, and do that all while maintaining
data privacy.” IT provides “that secure foundation to do business,” while he
and the CDO “are focused on the substantive use of data to drive decisions
and improve outcomes,” he said.
Should Data Engineers be Domain Competent?
A traditional data engineer views a table with one million records as
relational rows that must be crunched, transported and loaded to a different
destination. In contrast, an application programmer approaches the same
table as a set of member information or pending claims that impact lives. The
former is a purely technical view, while the latter is more
human-centric. These drastically differing lenses form the genesis of the
data siloes ... When we advocate domain knowledge, let’s not relegate it to
a few business analysts who are tasked to translate a set of high-level
requirements into user stories. Rather, domain knowledge implies that every
data engineer gains an intrinsic understanding of how functionality flows and
what it tries to accomplish. Of course, this is easier to preach than to
practice: expecting a data team to understand thousands of tables and
millions of rows is akin to expecting them to navigate a freeway at peak
time in reverse gear while blindfolded. It would be disastrous. While it’s
amply evident that data teams need domain knowledge, it’s hard to expect
that centralized data teams will deliver efficient results.
Quote for the day:
"Leaders are visionaries with a poorly developed sense of fear and no concept of the odds against them."
-- Robert Jarvik