Misled by metrics: 7 KPI mistakes IT leaders make
Metrics present an excellent opportunity for ownership and staff involvement, as
well as continuous improvement and process control. “The key to correctly
interpreting metrics is to engage your whole team and use the metrics to
collectively improve processes,” says Paul Gelter, coordinator of CIO services
at business and technology consulting firm Centric Consulting. When evaluating
metrics, Gelter believes it’s essential to strike a balance between cost,
quality, and service. Cost, for example, could be tracked as completed tickets per individual, yet ticket quality could be degraded by rework and repeated tickets. “Service could then be impacted by the response time, backlog, and
uptime,” he notes. It’s all about obtaining an optimal balance. Time really is
money, so don’t squander precious hours scrutinizing irrelevant metrics. Clearly
identify all goals before deciding which metrics to study. In most cases,
metrics that don’t support or reflect future decision options are unnecessary
and, worse yet, distracting and time-wasting.
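As a rough illustration of the cost/quality/service balance Gelter describes, the sketch below (hypothetical ticket fields and names of my own choosing, not anything from the article) computes all three views from the same ticket data so they can be read side by side:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Ticket:
    assignee: str
    response_hours: float   # time to first response
    reopened: bool          # True if the fix had to be reworked

def kpi_snapshot(tickets):
    """Summarise cost, quality and service views of the same ticket data."""
    per_person = {}
    for t in tickets:
        per_person[t.assignee] = per_person.get(t.assignee, 0) + 1
    return {
        # Cost view: completed tickets per individual
        "tickets_per_person": per_person,
        # Quality view: share of tickets that came back as rework
        "rework_rate": sum(t.reopened for t in tickets) / len(tickets),
        # Service view: average time to first response
        "avg_response_hours": mean(t.response_hours for t in tickets),
    }

tickets = [
    Ticket("ana", 2.0, False),
    Ticket("ana", 1.5, True),
    Ticket("ben", 4.0, False),
]
print(kpi_snapshot(tickets))
```

Reviewing the three numbers together, rather than rewarding raw ticket throughput alone, is the balance the article argues for.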
Cloud security risks remain very human
Researchers noted that the current view on cloud security has shifted the
responsibility from providers to adopters. If you ask the providers, which have always promoted a “shared responsibility” model, they have long required adopters to take responsibility for security on their side of the equation.
However, if you survey IT workers and rank-and-file users, I’m sure they would
point to cloud providers as the linchpins to good cloud security. It is also
interesting to see that shared technology vulnerabilities, such as denial of service, communications service provider data loss, and other traditional cloud
security issues ranked lower than in previous studies. Yes, they are still a
threat, but postmortems of breaches reveal that shared technology
vulnerabilities rank much lower on our list of worries. The core message is that
the real vulnerabilities are not as exciting as we thought. Instead, the lack of
security strategy and security architecture now top the list of cloud security
“no-nos.” Coming in second was the lack of training, processes, and checks to prevent misconfiguration, which I most often see as the root cause of security breaches. Of course, these problems are directly linked.
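One concrete reading of “checks to prevent misconfiguration” is an automated policy check that runs before anything is deployed. The sketch below is purely illustrative; the resource fields and rules are invented rather than taken from any provider's tooling, but it shows the shape of such a check:

```python
# Hypothetical pre-deployment check: flag common cloud misconfigurations
# in a declarative resource description before it is applied.

RULES = [
    ("public storage bucket",
     lambda r: r.get("type") == "bucket" and r.get("public_read", False)),
    ("database open to the internet",
     lambda r: r.get("type") == "database" and "0.0.0.0/0" in r.get("allowed_cidrs", [])),
    ("encryption disabled",
     lambda r: not r.get("encrypted", True)),
]

def audit(resources):
    """Return (resource name, problem) pairs for every rule a resource trips."""
    findings = []
    for res in resources:
        for name, check in RULES:
            if check(res):
                findings.append((res.get("name", "<unnamed>"), name))
    return findings

resources = [
    {"name": "logs", "type": "bucket", "public_read": True, "encrypted": True},
    {"name": "orders-db", "type": "database", "allowed_cidrs": ["10.0.0.0/8"], "encrypted": False},
]
for resource, problem in audit(resources):
    print(f"{resource}: {problem}")
```

Wiring a check like this into the pipeline is exactly the kind of process control whose absence the survey flags.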
Private 5G growth stymied by pandemic, lack of hardware
"As a network technology, 5G has become more mainstream for consumer usage as
networks have been upgraded," Hays says. "But it hasn't quite taken hold in the
enterprise or for private networks due to a lack of available solutions and
clarity around what use cases will take full advantage of 5G's capabilities."
Having the right use cases is critical, says Arun Santhanam, vice president for
telco at Capgemini Americas. "You want to mow your lawn, so you buy a
lawnmower," Santhanam says. "You don't buy a lawnmower then say, 'Now, what can
I do with it?' But that's the biggest mistake people make when adopting 5G. They
get caught up in it. Now they have a private 5G network – so what do they do
with it?" Enterprises that start out with use cases are much more successful, he
says. "That's why we're recommending a lab environment where these things can be
mocked up." Another challenge that companies can face is scalability. "If
something works in a smaller setup, there's no guarantee that it will work in a
bigger one," he says. Finally, there's the issue of interoperability.
Global file systems: Hybrid cloud and follow-the-sun access
Global file systems work by combining a central file service – typically on
public or private clouds – with local network hardware for caching and to
ensure application compatibility. They do this by placing all the storage in a
single namespace. This will be the single, “gold” copy of all data. Caching and syncing are needed to ensure performance. According to CTERA, one of the
suppliers in the space, a large enterprise could be moving more than 30TB of
data per site. Secondly, the system needs broad compatibility. The global file system needs to support migration from legacy, on-premises NAS hardware.
Operating systems and applications need to be able to access the global file
system as easily as they did previously with NFS or SMB. The system also needs
to be easy to use, ideally transparent to end-users, and able to scale. Few
firms will be able to move everything to a new file system at once, so a global file system that can grow as applications move to it is vital. ... As
a cloud-based service, global file systems appeal to organisations that need
to share information between sites – or with users outside the business
perimeter in use cases that were often bolstered during the pandemic.
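To make the caching-plus-gold-copy idea concrete, here is a minimal read-through/write-through cache sketch. It is not how any particular vendor implements a global file system, just an assumed two-tier model with an authoritative central store and per-site caches:

```python
# Minimal sketch: a local edge cache in front of the single "gold" namespace,
# roughly how a global file system keeps hot files close to users.
# Class names and behaviour are illustrative only.

class GoldStore:
    """Stands in for the central (cloud) copy that owns the namespace."""
    def __init__(self):
        self._objects = {}
    def read(self, path):
        return self._objects[path]
    def write(self, path, data):
        self._objects[path] = data

class EdgeCache:
    """Local cache that serves reads and writes through to the gold copy."""
    def __init__(self, gold):
        self.gold = gold
        self.local = {}
    def read(self, path):
        if path not in self.local:            # cache miss: pull from gold copy
            self.local[path] = self.gold.read(path)
        return self.local[path]
    def write(self, path, data):
        self.local[path] = data               # keep the hot copy locally
        self.gold.write(path, data)           # write through so gold stays authoritative

gold = GoldStore()
site_a, site_b = EdgeCache(gold), EdgeCache(gold)
site_a.write("/projects/report.docx", b"v1")
print(site_b.read("/projects/report.docx"))   # served from gold, then cached at site B
```

The real systems add conflict handling, byte-range locking and background sync, but the authoritative-copy-plus-cache split is the core of the architecture described above.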
Google Launches Advanced API Security to Combat API Threats
API security teams also can use Advanced API Security’s pre-configured rules
to identify malicious bots within API traffic. “Each rule represents a
different type of unusual traffic from a single IP address,” Ananda wrote. “If
an API traffic pattern meets any of the rules, Advanced API Security reports
it as a bot.” This service is targeted at financial services institutions,
which rely heavily on Google Cloud—four out of the top five U.S. banks ranked
by the Federal Reserve are already using Apigee, Google noted in the blog
post. The service is also designed to speed up the identification of data breaches by flagging bot requests that succeeded, returning the HTTP 200 OK status code. “Organizations in every region and industry are
developing APIs to enable easier and more standardized delivery of services
and data for digital experiences,” Ananda wrote. “This increasing shift to
digital experiences has grown API usage and traffic volumes. However, as
malicious API attacks also have grown, API security has become an important
battleground over business risk.”
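The per-IP rule idea Ananda describes can be sketched in a few lines. The following is not the Apigee or Advanced API Security API, only an assumed log format and two invented rules, but it shows how per-source rules and the successful (HTTP 200) bot-traffic signal fit together:

```python
from collections import defaultdict

def detect_bots(log, rate_threshold=100):
    """Group API traffic by source IP and report sources that trip any rule."""
    by_ip = defaultdict(list)
    for entry in log:                      # entry: {"ip", "path", "status"}
        by_ip[entry["ip"]].append(entry)

    findings = []
    for ip, entries in by_ip.items():
        rules_hit = []
        if len(entries) > rate_threshold:                  # rule 1: excessive request rate
            rules_hit.append("excessive request rate")
        if len({e["path"] for e in entries}) > 50:         # rule 2: broad path scanning
            rules_hit.append("broad path scanning")
        if rules_hit:
            # Successful calls (HTTP 200) from a flagged source deserve attention first.
            successes = sum(e["status"] == 200 for e in entries)
            findings.append({"ip": ip, "rules": rules_hit, "successful_calls": successes})
    return findings

sample = [{"ip": "203.0.113.7", "path": f"/v1/items/{i}", "status": 200} for i in range(150)]
print(detect_bots(sample))
```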
Friedman said the new AI system represented a breakthrough in the third
revolution of software development: the use of AI in coding. As an AI pair
programmer, it provides code-completion functionality and suggestions
similar to IntelliSense/IntelliCode, though it goes beyond those Microsoft
offerings with Codex, a new AI system developed by Microsoft partner OpenAI.
... Regarding the aforementioned Reddit comment, the reader had more to say
on the question of AI replacing dev jobs: Well this specifically, not even
close. To use this effectively you have to deeply understand every line of
code. Using it also requires you to have been able to write whatever snippet
was autocompleted yourself. But if it works well, it would be an amazing
productivity tool that reduces context switching. On the other hand, the time originally spent looking at documentation leads you to more fully understand the library, so for more complex work, it might hurt in the long run since you didn't look at the docs.
Chip-to-Cloud IoT: A Step Toward Web3
Reliable software design is essential for IoT devices and other internet-connected devices. It keeps hackers from stealing your identity or cloning your device for their own purposes.
Chip-to-cloud delivers on all fronts. These chipset characteristics confer
an extra security advantage. Each IoT node is cryptographically unique,
making it nearly impossible for a hacker to impersonate it and access the
more extensive corporate network to which it is connected. Chip-to-cloud also speeds things up by eliminating traffic delays between the logic program and the edge nodes that are ready to act on the information. The chip-to-cloud architecture of the internet-of-things is
secure by design. New tools are being developed to provide bespoke and older
equipment with data mobility capabilities, just like the current IoT.
Nevertheless, chip-to-cloud chipsets are always connected to the cloud. As a
result, the availability of assets and the speed of digital communication
across nodes, departments and facilities will be significantly improved.
Chip-to-cloud IoT is a significant step forward in the evolution of the IoT
toward Web3.
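A minimal sketch of the per-device cryptographic identity the article attributes to chip-to-cloud chipsets is shown below. It uses the Python cryptography package and keeps the private key in ordinary memory purely for illustration; in real chip-to-cloud hardware the key would live in a secure element and never leave the device:

```python
# Sketch of per-device cryptographic identity: each device signs its messages
# with its own key, and the cloud only accepts messages that verify against
# the public key registered for that device.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Provisioning: each device gets its own key pair; the cloud stores public keys.
device_key = Ed25519PrivateKey.generate()
registry = {"sensor-001": device_key.public_key()}

# Device side: sign each reading with the device-unique private key.
reading = b'{"device": "sensor-001", "temp_c": 21.4}'
signature = device_key.sign(reading)

# Cloud side: accept the message only if the signature matches the registered key.
try:
    registry["sensor-001"].verify(signature, reading)
    print("message accepted: device identity verified")
except InvalidSignature:
    print("message rejected: possible impersonation")
```

Because every node holds a different key, stealing one device's credentials does not let an attacker impersonate the rest of the fleet, which is the security property the paragraph above is pointing at.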
IoT in Agriculture: 5 IoT Applications for Smart Agriculture
Intelligent farming is a high-tech, capital-intensive method of growing food sustainably and cleanly. It is a component of contemporary
ICT (Information and Communication Technologies) applied to agriculture. A
system is created in IoT-based smart farming to automate the irrigation system
and monitor the agricultural field using sensors (light, humidity,
temperature, soil moisture, etc.). Farmers may monitor the condition of their
lots from any location. Smart farming that is IoT-based is significantly more
efficient than traditional farming. ... One of the most well-known Internet of
Things applications in agriculture is precision agriculture or “precision
farming.” Precision agriculture (PA) is a method of farm management that
leverages information technology (IT) to guarantee that crops and soil receive
the exact nutrients they require for maximum health and productivity. PA aims
to ensure economic success, environmental preservation, and sustainability by
assessing data produced by sensors and responding appropriately.
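As a toy illustration of the automated irrigation loop described above, the following sketch (sensor values, thresholds and function names are all invented) reads a soil-moisture value and opens a valve only when the field is dry:

```python
# Toy irrigation control loop in the spirit of IoT-based smart farming.
# A real deployment would read hardware sensors and drive actual valves.

import random
import time

SOIL_MOISTURE_MIN = 30.0   # percent; below this the field is considered dry

def read_soil_moisture():
    """Stand-in for a real soil-moisture sensor reading."""
    return random.uniform(10.0, 60.0)

def set_valve(open_valve):
    print("valve OPEN - irrigating" if open_valve else "valve CLOSED")

def control_loop(cycles=3, interval_s=1):
    for _ in range(cycles):
        moisture = read_soil_moisture()
        print(f"soil moisture: {moisture:.1f}%")
        set_valve(moisture < SOIL_MOISTURE_MIN)   # irrigate only when dry
        time.sleep(interval_s)

control_loop()
```

The same pattern extends naturally to the other sensors mentioned (light, humidity, temperature), with the readings also pushed to a dashboard so farmers can monitor their lots remotely.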
My Technical Writing Journey
My first general writing tip is to find a problem that bothers you. As engineers, our day-to-day lives should be full of questions. We can’t live without StackOverflow :)). If you can, then find a new job, because your current one is no longer challenging. The reason to find a problem close to you is that you know what the core of the problem is and what you and other people like you want solved. You will show full empathy for your audience. ... The
other approach is to narrow down your original scope when you have a broad
idea. You are writing a blog post, not a book. Don’t set overly ambitious goals. Otherwise, you will either make the article superficial, which doesn’t create much value, or the article will be too long to read. What I like is to find a unique entry point into the topic. For example, in the article How to
Write User-friendly Command Line Interfaces in Python, I focus on how to make
your CLI application more user-friendly. In 5 Python Tips to Work with
Financial Data, I tied Python tips to only finance data. In this way, you
always have a clear target reader group.
The Compounding (Business) Value of Composable Ecosystems
For anyone who has worked with end-user companies (companies that use, but
don’t sell software) before, you know that while many of the broad challenges
may be the same (I need to run containers), they each bring their own quirks
(but we need static egress gateways for our firewall). A composable system
helps tackle these common challenges while still allowing the choice to select
components that meet specific requirements. The cloud native landscape is so large for exactly this reason: end users need choice to meet their precise business needs. Now that we understand a little more about what composability
is, let’s see how it applies to the real world. ... Composability isn’t just about what projects and products your stack is made of; it also includes the composability of the ecosystem as a whole. The value of an ecosystem is not
just the sum of its parts, but rather the interrelationships between the parts
and how they can be assembled to meet the needs of the ecosystem and end
users. The ideas, people, and tools that make up an ecosystem can be
composable too.
Quote for the day:
"It is, after all, the responsibility of the expert to operate the familiar
and that of the leader to transcend it." -- Henry A. Kissinger