The tight connection between data governance and observability
Although data governance helps establish the right set of data management
policies and procedures, modern data stacks keep growing beyond their original
boundaries. With data sets now scaling across more data sources, more tables,
and more complexity, there is a pressing need to maintain a constant pulse on
the health of these systems. Since any amount of downtime can lead to partial,
erroneous, missing, or otherwise inaccurate data, organizations need to do
better than just implement a handful of policies. Data observability enables
organizations to manage these increasingly complex data systems and support an
ever-growing ecosystem of data sources and formats. By providing a real-time
view of the health and state of data across the enterprise, it empowers them to
identify and resolve issues rather than merely describe the problem.
Observability provides much-needed context around an issue, paving the way for
a quick resolution while also helping ensure it doesn’t recur.
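In concrete terms, a minimal data observability check might track freshness and volume for each table and raise alerts when either drifts out of bounds. The sketch below is a plain-Python illustration of that idea; the table name, thresholds, and function are hypothetical, not any particular observability product's API.

```python
import time

def check_table_health(name, last_updated_ts, row_count,
                       max_staleness_s=3600, expected_rows=(900, 1100)):
    """Return a list of observability alerts for a single table.

    Freshness check: has the table been updated recently enough?
    Volume check: did the latest load deliver a plausible number of rows?
    """
    alerts = []
    staleness = time.time() - last_updated_ts
    if staleness > max_staleness_s:
        alerts.append(f"{name}: stale by {staleness:.0f}s (freshness check)")
    lo, hi = expected_rows
    if not lo <= row_count <= hi:
        alerts.append(f"{name}: row count {row_count} outside [{lo}, {hi}] "
                      "(volume check)")
    return alerts

# Example: a table last loaded two hours ago, with far too few rows
alerts = check_table_health("orders", time.time() - 7200, 450)
for a in alerts:
    print(a)
```

Real observability platforms add lineage and schema-change tracking on top of checks like these, which is what supplies the context for root-cause analysis rather than just an alert.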
Data Ethics: New Frontiers in Data Governance
Navigating that crucial difference is rarely cut and dried even in simple,
day-to-day personal interactions. Still, within the world of data, ethical
questions can quickly take on multiple dimensions and present challenges unique
to the field. Assessing data ethics can be decidedly confusing, for as Lopez
pointed out, “Not all things that are bad for data are actually bad for the
world … and vice versa.” Whereas the ethical actions and judgments that we make
as private individuals tend to play out within a limited set of factors, the
implications of even the most innocuous events within large-scale Data
Management can be huge. Company data exists in “space,” potentially flowing
between departments and projects, but privacy agreements and other safeguards
that apply for some purposes may not apply for others. Data from spreadsheets
authored for in-house analytics, for example, might violate a client privacy
agreement if it migrates to open cloud storage.
Distributed ledger technology and the future of insurance
The rise of crypto itself also opens up new and lucrative opportunities for
insurers. Not only are we seeing an upward trajectory in consumer adoption of
crypto (which jumped worldwide by over 800% in 2021 alone), but there is also
significant momentum among institutional investors such as hedge funds and
pension funds. This is in part due to recent regulatory and legal clarifications
(you can read my reflections on the recent MiCA regulation here), but also the
unabated enthusiasm of end investors for this new asset class. Another key
accelerator is the growing acceptance of ‘proof of stake’ (as opposed to
‘proof of work’) as the primary consensus mechanism to validate transactions on
the blockchain. Critically, proof of stake is far less energy-intensive than its
counterpart (by about 99%), and overcomes critical limits on network capacity
needed to drive institutional adoption. Ethereum’s transition from proof of work
to proof of stake in September of this year was a watershed moment for the
industry. As a result, banks are looking to meet institutional demand by
launching their own crypto custody solutions.
Digital transformation: 3 pieces of contrarian advice
The contrarian advice, which is now starting to enter the mainstream, is that
it’s time to fully embrace the hybrid cloud. Companies are learning that while
public cloud still has many benefits, the cost over time adds up. As an
organization grows, there are usually opportunities to do at least some of the
workload in the private cloud to gain benefits in locality, data transfer, and
flexibility of in-house customizations. The private cloud also brings various
compliance, privacy, and security advantages. The hybrid cloud can offer “best
of both worlds” benefits, such as edge computing and more effective paths to
advanced technologies such as artificial intelligence and machine learning
(AI/ML). That effect comes into play when you take advantage of APIs and
solutions that are open source and based on open standards. One example is a
workload that runs comfortably in your private cloud but gains a speedup only
when run on specialized hardware such as a supercomputer, quantum computer, or
AI/ML (or other workload-specific) hardware.
EU Cyber Resilience Act
The CRA applies to hardware and software that contain digital components and
whose intended use includes a connection to a device or network; it covers all
digital products placed on the EU market (including imported products). ...
Manufacturers will need to assess the cyber risk of their digital hardware and
software and take continued action to fix problems during the lifetime of the
product. In addition, before placing any digital product on the market,
manufacturers will be required to conduct a formal ‘conformity assessment’ of
such product and implement appropriate policies and procedures documenting
relevant cybersecurity aspects of the products. Companies will have to
notify the EU cybersecurity agency (ENISA) of any exploited vulnerability within
the product, and any incident impacting product security, within 24 hours of
becoming aware. Manufacturers will also be required to notify users of any
incident impacting product security without delay.
How DevOps Helps With Secure Deployments
The goal of DevSecOps is to provide security best practices in a way that
doesn’t disrupt team productivity. Secure development is the key to a smooth
deployment process. It’s frustrating to have security built into your product
yet see it implemented inconsistently or not taken seriously. DevSecOps brings
back that focus by reminding everyone just how important and necessary good
hygiene practices are for both developers and operations. For security to be
more effective and reliable, it should be incorporated from the very beginning.
Instead of waiting until there is an issue or crisis before implementing
protective measures such as firewalls and encryption keys, you want your
developers working on them up front so they can ensure everything will work
well together later. This helps surface security issues as early in the
process as possible, close to the decisions that introduced them. It’s much
easier (and less painful) to fix security issues while you still remember the
context; it’s like having an extra set of eyes on the project.
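As a toy illustration of catching issues early, the sketch below scans source text for hardcoded secrets before it ever reaches a pipeline. This is the kind of cheap pre-commit check DevSecOps pushes to the start of the process. The regex patterns and sample content are hypothetical, standing in for a real scanner.

```python
import re

# Hypothetical patterns for secrets that should never be committed.
SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "AWS access key"),
    (re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"), "hardcoded password"),
]

def scan_source(text):
    """Return (line_number, description) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, desc in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, desc))
    return findings

# Example: a config snippet with a hardcoded password on line 2
sample = "host = 'db.internal'\npassword = 'hunter2'\n"
findings = scan_source(sample)
for lineno, desc in findings:
    print(f"line {lineno}: {desc}")
```

In practice a dedicated secret scanner run as a pre-commit hook or CI step plays this role; the point is that the check happens before deployment, not after an incident.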
Why enterprise architecture and IT are distinct
The reality is that enterprise architecture too often functions as a specialism
within IT. One way to draw the distinction, however, is to think about the
differences between IT and enterprise architecture in terms of information
flow. The ability to store and share ideas and information has always been, and
always will be, at the heart of business actions, and it is something we now
deeply associate with IT. All of the technological infrastructure, however,
means nothing without the context in which it operates. Focusing purely on IT
overlooks the importance of users, as well as the employees and customers who
generate, share, and utilize the information. Information can’t be useful if it
exists in a vacuum; it needs to connect and permeate a business in a dynamic
way that’s bespoke to its ecosystem, people, and situation. Enterprise
architecture’s role is to enable information flow beyond just establishing the
infrastructure. By considering and responding to the way systems interact with
users and business processes, enterprise architecture aligns with long-term
initiatives and stakeholders, as opposed to just the deployment of technology
and tools alone.
How to connect hybrid knowledge workers to corporate culture
Managers must be able to articulate what the company culture is and translate
company culture to daily team life, Gartner said. However, the Gartner survey of
knowledge workers revealed that less than half of managers can effectively
communicate why the broader organisational culture is important. “Teams and
managers are the best mechanism for creating culture connectedness by enabling
each team to create their own micro-culture while still supporting the
organisation,” said Steele. “Organisations can double employee culture
connectedness by embracing micro-cultures.” To help connect hybrid knowledge
workers to company culture, managers should gauge employees’ understanding of
the broader organisational values and their team’s specific norms and processes,
she advised. Managers can then work together with their teams to translate what
each value means in the context of their work, said Steele, adding that they can
then create a list of behaviours that contribute to the culture and those that
will derail it.
How Data Privacy Regulations Can Affect Your Business
An obvious impact of data regulations is that they reduce the amount of data a
business can collect. Businesses collect and store data to help develop and
improve their company, establishing a better understanding of their customer
base and target audience. Unfortunately, storing large quantities of data can
pose a significant risk in terms of cybercrime, requiring considerable
resources to protect IT systems. As a result, some businesses are choosing
only to collect data that is critical to their operations, limiting the chances
of a costly data breach. ... The risk management and compliance of businesses
and any third parties involved are very important in the modern business
climate. New regulations require many contractual safeguards, strict data
protection measures, and evidence that compliance has been achieved. ...
There have also been new data roles created within businesses in recent years,
including those of internal privacy managers, chief data officers (CDOs),
privacy executives, data protection officers, and data scientists.
Accelerating SQL Queries on a Modern Real-Time Database
Modern databases have a cluster architecture spanning multiple nodes for scale,
performance, and reliability. High density of fast storage is achieved through
solid-state disks (SSDs). Hybrid memory architecture (HMA) stores indexes and
data in dynamic random-access memory (DRAM), SSD, and other devices to provide
cost-effective fast storage capacity. ... Disk (SSD) reads and writes are
optimized for latency and throughput. Indexes play a key role in realizing fast
access to data. This requires supporting secondary indexes on integer, string,
geospatial, map, and list columns. ... The thread architecture on a cluster node
is optimized to exploit parallelism of multicore processors of modern hardware,
and also to minimize conflict and maximize throughput. The data is distributed
uniformly across all nodes to maximize parallelism and throughput. The client
library connects directly to individual cluster nodes and processes a request in
a single hop, by distributing the request to nodes where it is processed in
parallel, and assembling the results.
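The scatter-gather pattern described above can be sketched roughly as follows. This is a plain-Python toy with in-memory dictionaries standing in for cluster nodes; the hashing scheme and function names are illustrative, not the actual client library.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy cluster: each "node" holds a shard of the data.
NODES = [dict() for _ in range(4)]

def node_for(key):
    """Distribute keys uniformly across nodes (illustrative hashing)."""
    return NODES[hash(key) % len(NODES)]

def put(key, value):
    # Single hop: the write goes straight to the owning node.
    node_for(key)[key] = value

def parallel_query(predicate):
    """Scatter a query to every node in parallel, then assemble results."""
    def scan(node):
        return [v for v in node.values() if predicate(v)]
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        partials = pool.map(scan, NODES)
    results = []
    for part in partials:
        results.extend(part)  # gather step: merge per-node partial results
    return results

for i in range(100):
    put(f"key-{i}", i)
evens = parallel_query(lambda v: v % 2 == 0)
```

Because each node scans only its own shard, the work parallelizes across the cluster, and the client's single round trip per node is what keeps request latency low.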
Quote for the day:
"You must stand firm if you wish to lead
the firm" -- Constance Chuks Friday