The open source licensing war is over
Too many open source warriors think that the license is the end, rather than
just a means to grant largely unfettered access to the code. They continue to
fret about licensing when developers mostly care about use, just as they always
have. Keep in mind that more than anything else, open source expands access to
quality software without involving the purchasing or (usually) legal teams. This
is very similar to what cloud did for hardware. The point was never the license.
It was always about access. Back when I worked at AWS, we surveyed developers to
ask what they most valued in open source leadership. You might think that
contributing code to well-known open source projects would rank first, but it
didn’t. Not even second or third. Instead, the No. 1 criterion developers used
to judge a cloud provider’s open source leadership was that it “makes it easy to
deploy my preferred open source software in the cloud.” ... One of the things we
did well at AWS was to work with product teams to help them discover their
self-interest in contributing to the projects upon which they were building
cloud services, such as ElastiCache.
Navigate Serverless Databases: A Guide to the Right Solution
One of the core features of Serverless is pay-as-you-go pricing. Almost all
Serverless databases attempt to address a common challenge: how to provision
resources economically and efficiently under uncertain workloads. Prioritizing
lower costs means provisioning fewer resources, but an unexpected spike in
business demand can then compromise user experience and system stability. On
the other hand, more generous, safety-first provisioning wastes resources and
raises costs. Striking a balance between these two styles requires complex,
meticulous engineering management that diverts focus from the core business.
Furthermore, the pay-as-you-go billing model is implemented differently across
Serverless products. Most offer granular billing based on storage capacity and
per-unit read/write operations, which is largely made possible by distributed
architectures that allow finer-grained resource scaling.
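To make that trade-off concrete, here is a minimal sketch of the break-even
arithmetic between provisioned capacity and pay-per-request billing. All
prices and workload figures below are hypothetical, invented purely for
illustration; real Serverless products price these dimensions differently.

```python
# Hypothetical sketch: when does pay-per-request beat provisioned capacity?
# All prices and workload numbers are made up for illustration only.

HOURS_PER_MONTH = 730

def provisioned_cost(peak_rps: float, price_per_rps_hour: float) -> float:
    # Provisioned capacity must be sized for the peak and is billed
    # around the clock, whether or not the capacity is used.
    return peak_rps * price_per_rps_hour * HOURS_PER_MONTH

def on_demand_cost(monthly_requests: float, price_per_million: float) -> float:
    # Pay-as-you-go bills only the requests actually served.
    return monthly_requests / 1e6 * price_per_million

# Spiky workload: peaks at 500 req/s but averages far lower.
peak_rps = 500
monthly_requests = 50e6  # ~19 req/s on average

prov = provisioned_cost(peak_rps, price_per_rps_hour=0.00065)
ondm = on_demand_cost(monthly_requests, price_per_million=1.25)
print(f"provisioned: ${prov:,.2f}/mo  on-demand: ${ondm:,.2f}/mo")
# Here on-demand wins by roughly 4x; for steady high throughput,
# the provisioned line eventually crosses below the on-demand one.
```

The point of the sketch is that the right billing model depends on how far the
peak sits above the average, which is exactly the uncertainty the excerpt
describes.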
Building a Beautiful Data Lakehouse
It’s common to compensate for the respective shortcomings of existing
repositories by running multiple systems, for example, a data lake, several data
warehouses, and other purpose-built systems. However, this process frequently
creates a few headaches. Most notably, data stored in one repository type is
often excluded from analytics run on another, which weakens the results. In
addition, having multiple systems requires building expensive, operationally
burdensome processes to move data from lake to warehouse when required. To
overcome the data lake's quality issues, for example, many teams use
extract/transform/load (ETL) processes to copy a small subset of data from
lake to warehouse for important decision-support and BI applications. This
dual-system architecture carries two costs. First, it requires continuous
engineering to ETL data between the two platforms, and each ETL step risks
introducing failures or bugs that reduce data quality. Second, leading ML
systems, such as TensorFlow, PyTorch, and XGBoost, don't work well on data
warehouses.
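To illustrate that last point, here is a minimal sketch of why ML frameworks
pair naturally with open lake formats: PyTorch can consume a Parquet file
directly, with no warehouse export step in between. The path and column names
are hypothetical, and the sketch assumes pyarrow, pandas, and torch are
installed.

```python
# Minimal sketch: an ML framework reading an open lake format (Parquet)
# directly, skipping any lake-to-warehouse ETL. Path and columns are
# hypothetical placeholders.
import pyarrow.parquet as pq
import torch
from torch.utils.data import DataLoader, Dataset

class ParquetDataset(Dataset):
    def __init__(self, path: str, feature_cols: list[str], label_col: str):
        # Read only the needed columns straight from the lake file.
        table = pq.read_table(path, columns=feature_cols + [label_col])
        df = table.to_pandas()
        self.x = torch.tensor(df[feature_cols].values, dtype=torch.float32)
        self.y = torch.tensor(df[label_col].values, dtype=torch.float32)

    def __len__(self) -> int:
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

# Hypothetical lakehouse table materialized as a Parquet file:
ds = ParquetDataset("events/part-0.parquet",
                    feature_cols=["f1", "f2", "f3"], label_col="label")
loader = DataLoader(ds, batch_size=256, shuffle=True)
```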
How the best CISOs leverage people and technology to become superstars
Exemplary CISOs are also able to address other key pain points that
traditionally flummox good cybersecurity programs, such as the relationships
between developers and application security (AppSec) teams, or how
cybersecurity is viewed by other C-suite executives and the board of
directors. For AppSec relations, good CISOs realize that developer enablement
helps to shift security farther to the so-called left and closer to a piece of
software’s origins. Fixing flaws before applications reach production is
important, and far better than the old way of building code first and running
it past the AppSec team at the last minute; catching flaws early avoids those
annoying hotfixes and delivery delays. But shifting left can’t solve all
of AppSec’s problems alone. Some vulnerabilities may not show up until
applications get into production, so relying on shifting left in isolation to
catch all vulnerabilities is impractical and costly. There also needs to be
continuous testing and monitoring in the production environment, and yes,
sometimes apps will need to be sent back to developers even after they have
been deployed.
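As a toy illustration of what "shifting left" can look like in practice, the
sketch below is a pre-commit-style check that scans staged files for obvious
hardcoded secrets before code ever reaches the AppSec team. The patterns are
deliberately simplistic; a real program would rely on a dedicated SAST or
secret-scanning tool rather than this script.

```python
# Toy "shift-left" check: scan git-staged files for obvious hardcoded
# secrets and block the commit if anything matches. Illustrative only.
import re
import subprocess
import sys
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}"),
]

def staged_files() -> list[str]:
    # Files added, copied, or modified in the staged diff.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = Path(path).read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for pat in SECRET_PATTERNS:
            if pat.search(text):
                findings.append(f"{path}: matches {pat.pattern}")
    for f in findings:
        print("possible secret:", f, file=sys.stderr)
    return 1 if findings else 0  # nonzero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```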
TSA Updates Pipeline Cybersecurity Directive to Include Regular Testing
The revised directive, developed with input from industry stakeholders and
federal partners, including the Cybersecurity and Infrastructure Security
Agency (CISA) and the Department of Transportation, will “continue the effort
to reinforce cybersecurity preparedness and resilience for the nation’s
critical pipelines”, the TSA said. The reissued security directive for
critical pipeline companies follows the initial directive announced in July
2021 and renewed in July 2022, and the TSA said that the requirements issued
in previous years remain in place. According to the 2022 security directive
update, pipeline owners and operators are required to establish and execute a
TSA-approved cybersecurity implementation plan with specific cybersecurity
measures, and to develop and maintain a cybersecurity incident response plan
(CIRP) covering the measures to be taken during cybersecurity incidents.
What is the cost of a data breach?
"One particular cost that continues to have a major impact on victim
organizations is theft/loss of intellectual property," Glenn J. Nick,
associate director at Guidehouse, tells CSO. "The media tend to focus on
customer data during a breach, but losing intellectual property can devastate
a company's growth," he says. "Stolen patents, engineering designs, trade
secrets, copyrights, investment plans, and other proprietary and confidential
information can lead to loss of competitive advantage, loss of revenue, and
lasting and potentially irreparable economic damage to the company." It's
important to note that how a company responds to and communicates a breach can
have a large bearing on the reputational impact, along with the financial
fallout that follows, Mellen says. "Understanding how to maintain trust with
your consumers and customers is really, really critical here," she adds.
"There are ways to do this, especially around building transparency and using
empathy, which can make a huge difference in how your customers perceive you
after a breach. If you try to sweep it under the rug or hide it, then that
will truly affect their trust in you far more than the breach alone."
Meeting Demands for Improved Software Reliability
“Developers need to fix bugs, address performance regressions, build features,
and get deep insights about particular service or feature level interactions
in production,” he says. That means they need access to necessary data in
views, graphs, and reports that make a difference to their workflows.
“However, this data must be integrated and aligned with IT operators to ensure
teams are working across the same data sets,” he says. Sigelman says IT
operations is a crucial part of an organization’s overall reliability and
quality posture. “By working with developers to connect cloud-native systems
such as Kubernetes with traditional IT applications and systems of record, the
entire organization can benefit from a centralized data and workflow
management pane,” he says. From this point, event and change management can be
combined with observability constructs, such as service-level objectives
(SLOs), not only to provide a single view across the entire IT estate but also
to demonstrate the value of reliability to the entire organization.
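As a concrete example of the SLO arithmetic involved, here is a minimal sketch
of an error-budget calculation; the target and request counts are invented for
illustration.

```python
# Sketch of SLO error-budget arithmetic. Numbers are illustrative.

def error_budget_remaining(slo_target: float,
                           total_requests: int,
                           failed_requests: int) -> float:
    # The budget is the number of failures the SLO tolerates over the
    # window; return the fraction of that budget still unspent.
    allowed = (1.0 - slo_target) * total_requests
    return 1.0 - (failed_requests / allowed) if allowed else 0.0

# A 99.9% availability SLO over a 30-day window:
total, failed = 120_000_000, 60_000
remaining = error_budget_remaining(0.999, total, failed)
print(f"error budget remaining: {remaining:.1%}")  # -> 50.0%
```

Burning half the budget mid-window is exactly the kind of signal that lets a
single view of the IT estate translate into reliability decisions.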
How will artificial intelligence impact UK consumers' lives?
In the next five years, I expect we may see a rise in new credit options and
alternatives, such as “Predictive Credit Cards,” where AI anticipates a
consumer’s spending needs based on their past behaviour and adjusts the credit
limit or offers tailored rewards accordingly. Additionally, fintechs are
likely to integrate Large Language Models (LLMs) and add AI to digital and
machine-learning powered services. ... Through AI, consumers may also be able
to access a better overview of their finances, specifically personalised
financial rewards, as they would have access to tools to review all
transactions, receive recommendations on personalised spend-based rewards, and
even benchmark themselves against other cardholders in similar demographics or
industry standards. Consumers may also be able to ask questions and get
answers at the click of a button, for example, ‘How much debt do I have
compared to my available credit limits?’ or ‘What’s the best way to use my
rewards points based on my recent purchases?’, improving financial literacy
and potentially providing them with more spending/saving power and
personalised experiences in the long run.
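As a toy sketch of the first question above, the underlying arithmetic is
simple once account data is available; the accounts below are invented, and a
real assistant would pull them from the bank's systems and layer an LLM on top
for the conversational interface.

```python
# Toy answer to "How much debt do I have compared to my available credit
# limits?" with invented account data.
accounts = [
    {"name": "Visa",       "balance": 1850.0, "limit": 5000.0},
    {"name": "Mastercard", "balance":  420.0, "limit": 3000.0},
]

total_debt = sum(a["balance"] for a in accounts)
total_limit = sum(a["limit"] for a in accounts)
utilization = total_debt / total_limit  # credit utilization ratio

print(f"Total debt: £{total_debt:,.2f} of £{total_limit:,.2f} "
      f"available credit ({utilization:.0%} utilization)")
```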
IT Strategy as an Enterprise Enabler
IT strategy is a plan to create an information technology capability that
maximizes business value for the organization. IT capability is the
organization's ability to meet business needs and improve business processes
using IT-based systems. The objective of IT strategy is to spend the least
amount of resources while generating the best ROI, and it sets the direction
for the IT function in an organization. A successful IT strategy helps
organizations reduce operational bottlenecks, control total cost of ownership
(TCO), and derive value from technology. ... IT strategy definition and
implementation covers the key aspects of technology management: planning,
governance, service management, risk management, cost management, human
resource management, hardware and software management, and vendor management.
Broadly, IT strategy has five phases: Discovery, Assess, Current IT, Target
IT, and Roadmap. The idea is to keep the usual annual and multiyear plan but
insert regular check-ins along the way, revisiting the IT strategy every
quarter or every six months to ensure optimal business value is created.
AI system audits might comply with local anti-bias laws, but not federal ones
"You shouldn’t be lulled into false sense of security that your AI in employment
is going to be completely compliant with federal law simply by complying with
local laws. We saw this first in Illinois in 2020 when they came out with the
facial recognition act in employment, which basically said if you’re going to
use facial recognition technology during an interview to assess if they’re
smiling or blinking, then you need to get consent. They made it more difficult
to do [so] for that purpose. "You can see how fragmented the laws are, where
Illinois is saying we’re going to worry about this one aspect of an application
for facial recognition in an interview setting. ... "You could have been doing
this since the 1960s, because all these tools are doing is scaling employment
decisions. Whether the AI technology is making all the employment decisions or
one of many factors in an employment decision; whether it’s simply assisting you
with information about a candidate or employer that otherwise you wouldn’t have
been able to ascertain without advanced machine learning looking for patterns
that a human couldn’t have fast enough.
Quote for the day:
"A leader should demonstrate his
thoughts and opinions through his actions, not through his words." --
Jack Weatherford