Building Higher-Quality Software With Open Source CD
Prior to the rise of open source CD solutions, companies often relied on point
automation using scripts. These could improve efficiency a bit, but when
companies moved from the monolithic architecture of a mainframe or on-premises
servers to a microservices-based production environment, the scripts could not
be easily adapted or scaled to cope with the more complex environment. This led
to the development of continuous delivery orchestration solutions that could
ensure code updates would flow to their destination in a repeatable, orderly
manner. Two highly popular open source CD solutions have emerged: Spinnaker and
Argo. Spinnaker was developed by Netflix and extended by Google, Microsoft and
Pivotal. It was made available on GitHub in 2015. Spinnaker creates a “paved
road” for application delivery, with guardrails to ensure only valid
infrastructure and configurations reach production. It facilitates the creation
of pipelines that represent a software delivery process. These pipelines can be
triggered in a variety of ways, including manually, via a cron expression, or at
the completion of a Jenkins job or another pipeline.
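A minimal sketch of what such triggers can look like, written as a Python dict mirroring Spinnaker's pipeline JSON; the pipeline, application, and job names are hypothetical, and the exact schema should be checked against the Spinnaker documentation for your version:

```python
# Illustrative Spinnaker pipeline definition with two of the trigger
# types mentioned above: a cron schedule and a Jenkins job completion.
# Field names follow Spinnaker's pipeline JSON, but treat the exact
# schema as an assumption, not a reference.
pipeline = {
    "name": "deploy-to-prod",             # hypothetical pipeline name
    "application": "demo-app",            # hypothetical application name
    "triggers": [
        {
            "type": "cron",
            "cronExpression": "0 0 3 * * ?",  # run nightly at 03:00
            "enabled": True,
        },
        {
            "type": "jenkins",
            "master": "ci-jenkins",       # hypothetical Jenkins master
            "job": "demo-app-build",      # fires when this job completes
            "enabled": True,
        },
    ],
    # Stages would describe the delivery process itself (bake, deploy, ...).
    "stages": [],
}
```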
Quantifying Technical Debt as Financial Debt: an Impossible Thing for Developers
There are many things about technical debt that can be quantified. Henney
mentioned that we can list off and number specific issues in code and, if we
take the intentional sense in which technical debt was originally introduced, we
can track the decisions that we have made whose implementations need to be
revisited. If we focus on unintentional debt, we can look at a variety of
metrics that tell us about qualities in code. There’s a lot that we can quantify
when it comes to technical debt, but the actual associated financial debt is not
one of them, as Henney explained: “The idea that we can run a static analysis
over the code and come out with a monetary value that is a meaningful
translation of technical debt into a financial debt is both a deep
misunderstanding of the metaphor – and how metaphors work – and an
impossibility.” According to Henney, quantifying how much financial debt is
present in the code doesn’t work. At the very least, we need a meaningful
conversion function that takes one kind of concept, e.g., "percentage of
duplicate code" or "non-configurable database access", and translates it to
another, e.g., euros and cents.
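As an illustration of the quantifiable side, here is a minimal Python sketch that computes one such metric, the percentage of duplicated lines in a source tree. The metric itself is real enough; Henney's point is that there is still no principled conversion function from its output to euros and cents:

```python
# Count how many non-blank source lines appear more than once in a tree.
# This is the kind of code-quality number we *can* quantify; translating
# it into money is the step the metaphor does not support.
from collections import Counter
from pathlib import Path

def duplicate_line_percentage(root: str, pattern: str = "*.py") -> float:
    counts = Counter()
    for path in Path(root).rglob(pattern):
        for line in path.read_text(errors="ignore").splitlines():
            stripped = line.strip()
            if stripped:                 # ignore blank lines
                counts[stripped] += 1
    total = sum(counts.values())
    duplicated = sum(n for n in counts.values() if n > 1)
    return 100.0 * duplicated / total if total else 0.0

print(f"{duplicate_line_percentage('src'):.1f}% duplicated lines")
```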
How industrial IoT is forcing IT to rethink networks
IIoT is redefining the types of data that enterprises use, and how networks
process this data. For example, an IIoT network primarily transmits and
processes unstructured data, not fixed record transactional data. In contrast,
the corporate network processes data that is far more predictable, digestible
and manageable. The sheer volume of IIoT data and traffic makes it virtually a
necessity to implement a single, private, dedicated network at each
manufacturing facility for its IIoT devices. Security is also a concern, because
the networks that operate on the edges of the enterprise must often be
maintained and administered by non-IT personnel who don’t have training in IT
security practices. It’s not uncommon for someone on a production floor to shout
a password to another employee so they can access a network resource — nor is it
uncommon for someone on the floor to admit another individual into a network
equipment cage that is supposed to be physically secured and accessible by only
a few authorized personnel.
Cloud architects are afraid of automation
As humans, we’re just not that good. While we have experience driving cars and
can look out the front window, we don’t have a perfect understanding of current
data, past data, and what this data likely means in the operation and driving of
the vehicle. Properly configured automation systems do. For the same reasons
that we are anxious when our cars drive away without us actively turning the
wheel, we are slow to adopt automation for cloud deployments. Those charged with
making core decisions about automating security, operations, FinOps, etc., are
actively avoiding automation, largely because they are uncomfortable with
critical processes being carried out without humans looking on. I get it. At the
end of the day, automation is a leap of faith that the automated systems will
perform better than humans. I understand the concern that they won’t work. The
adage is true: “To really screw things up requires a computer.” If you make a
mistake in setting these systems up, you can indeed do real damage. So, don’t do
that. However, as many people also say: “The alternative sucks.” Not using
automation means you're missing out on approaches and mechanisms to run your
cloud systems more cheaply and efficiently.
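A minimal sketch of what “don't do that” can look like in practice: automation that defaults to a dry run and acts only on explicit approval, so a misconfigured rule reports what it would have done instead of doing damage. The stop_idle_instance helper and instance IDs are hypothetical stand-ins for whatever your cloud SDK exposes:

```python
# Guardrailed remediation: dry-run by default, explicit opt-in to act.

def stop_idle_instance(instance_id: str) -> None:
    # A real call to the cloud provider's API would go here.
    print(f"stopping {instance_id}")

def remediate(idle_instances: list[str], dry_run: bool = True,
              approved: bool = False) -> None:
    for instance_id in idle_instances:
        if dry_run or not approved:
            print(f"[dry run] would stop {instance_id}")
        else:
            stop_idle_instance(instance_id)

# First run only reports; the second performs the action deliberately.
remediate(["i-0abc123", "i-0def456"])
remediate(["i-0abc123"], dry_run=False, approved=True)
```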
Cybersecurity, cloud and coding: Why these three skills will lead demand in 2023
As the scale and growth of software development accelerates, and with ongoing AI
developments in programming and engineering, the role requirements of software
development also look set to change. "AI/ML are changing the world of
programming much like the calculator and the computer changed the world," says
Stormy Peters, VP of Communities at GitHub. "These technological advancements
are taking care of a lot of the mundane, grunt work that developers once had to
devote all their time to. Development looks different now." ... As we enter 2023
and software development remains at the heart of business strategies,
problem-solving, critical thinking and other human skills will prove integral.
"While emerging technologies will increasingly enable them to stay in the flow
and solve challenging problems, the technicalities in being able to program,
engineer, and develop code through a high-level understanding of AI, DevOps, and
programming languages will also stay central in importance to the discipline,"
she adds.
How to effectively compare storage system performance
The best metrics to compare are the ones most applicable to the applications and
workloads you will run. If the application is an Oracle database, the
performance metric most applicable is 8 KB mixed read/write random IOPS. When
the vendor only provides the 4 KB variation, there is a way to roughly estimate
the 8 KB results -- simply divide the 4 KB results in half. If the vendor
objects, ask for actual 8 KB test results. Use this same simple math for other
I/O sizes. Throughput is somewhat more difficult to standardize, especially if
vendors don't supply it. You can roughly calculate it by multiplying the
sequential read IOPS by the size of the I/O. Latency is the most difficult to
standardize, especially when vendors measure it differently. There are many
factors that affect application latency, such as storage system load, storage
capacity utilization, storage media, storage DRAM caching, storage network
congestion, application server load, application server utilization and
application server contention. The most important question to ask is how the
vendor measured the latency, under what loads and from where.
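A worked example of the rough estimates above, using made-up vendor numbers:

```python
# Rule of thumb 1: halve the vendor's 4 KB IOPS to approximate 8 KB IOPS.
vendor_4k_iops = 400_000             # hypothetical vendor-quoted 4 KB IOPS
est_8k_iops = vendor_4k_iops / 2
print(f"Estimated 8 KB IOPS: {est_8k_iops:,.0f}")   # 200,000

# Rule of thumb 2: throughput ~ sequential read IOPS x I/O size.
seq_read_iops = 10_000               # hypothetical sequential read IOPS
io_size_bytes = 256 * 1024           # at a 256 KB I/O size
throughput_gbps = seq_read_iops * io_size_bytes / 1e9
print(f"Estimated throughput: {throughput_gbps:.1f} GB/s")  # 2.6 GB/s
```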
8 secrets of successful IT freelancers
An often-overlooked skill is having the knowledge, courage, and ability to steer
the client in the right direction. “The customer wants to use the freelancer’s
experience and proactivity, therefore it’s very important that the IT freelancer
states his or her true opinion when he or she thinks that the customer is moving
in the wrong direction,” says Soren Rosenmeier, CEO of Right People Group, a
firm that matches clients with IT and business consultants. Don’t jump the gun,
however. Before offering any crucial advice, it’s important to have a complete
understanding of the issue at hand. “There might also be a lot of other factors
… in the organization that the IT freelancer is unaware of,” Rosenmeier notes.
Therefore, prior to offering a suggestion, it’s important to first listen to
exactly what the client wants. If the IT freelancer is honest and upfront, the
client will receive the benefit of hiring a highly experienced expert, including
insights from all the experience the freelancer has gained by working with many
other organizations. “At the same time, the customer gets the simplicity and the
execution that they want from an external expert that’s hired in to do a
specific job,” Rosenmeier says.
In a managed service model, who is responsible and accountable for data?
When it comes to ensuring compliance, since accountability always lies with the
business, it is essential to ensure that the MSP is compliant before outsourcing
any data management functions. However, before this can be done, it is essential
to establish what exactly needs to be complied with, which is often the most
difficult question, given the myriad of regulations and legislation that apply
depending on the sector and regions the business operates in. There are two
pillars to consider when engaging with an MSP with regard to responsibility for
data management: the first is data availability and recovery, and the second is
the retention of data. However, the requirements for compliance, and ultimately
accountability, for each will depend on the individual business. This
means that before your data can be deemed compliant, you need to understand what
that means for your business and have a framework in place that outlines this.
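As an illustration only, such a framework might start as a simple policy document covering the two pillars; every field name and value here is hypothetical and would depend on the regulations that actually apply to the business:

```python
# A hypothetical policy the business and MSP can both be held to.
data_policy = {
    "availability_and_recovery": {
        "rpo_hours": 4,            # max tolerable data loss (recovery point)
        "rto_hours": 8,            # max tolerable downtime (recovery time)
        "backup_frequency": "daily",
    },
    "retention": {
        "financial_records_years": 7,
        "customer_data_years": 2,
        "delete_on_request": True, # e.g., GDPR-style erasure obligations
    },
}
```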
What the experts say about the cybersecurity skills gap
In terms of the skills that are needed, all three cybersecurity leaders agreed
that various technical skills are necessary, as with any IT role.
However, Killian pointed out that not every cybersecurity role is purely a
technical one. “Technical skills are usually easier to learn than other
important skills like curiosity, ability to ‘play’ in the grey – security issues
are rarely obvious ‘yes or no’ problems to solve – and the ability to build
relationships with stakeholders. So, unless technical skills are required for
the role at hand, they should be prioritised appropriately in job postings,” she
said. ... Naidoo reaffirmed that great attitudes and high aptitudes are
essential as “technical skills can be taught”. However, she also said it’s
important to keep on top of how the tech industry is evolving. “Whatever
technical skills are needed in the industry, a corresponding security skill is
necessary to secure that technology. So, whether that’s blockchain, quantum or
artificial intelligence, or even traditional functions like networks, operating
systems and databases, one needs to understand these technologies in order to
properly secure them.”
How organisations can right-size their data footprint
“Going on a data diet can be healthy. Cutting out all that junk data that bloats
our systems costs us money, raises our data risks and distracts us from the
nutritious data that will help us grow. Sometimes, less is truly more.” To
reduce data risks and identify useful data, organisations can create synthetic
data, which is artificially created data with similar attributes to the original
data. According to Gartner, synthetic data will enable organisations to avoid
70% of privacy violation sanctions. Parker said: “If you have sensitive customer
data that you want to use but you can’t, you could replace it with synthetic
data without losing any of the insights it can deliver.” She added that this
could also facilitate data sharing across countries and in industries such as
healthcare and financial services. In the UK, for example, the Nationwide
Building Society used its transaction data to generate synthetic datasets that
could be shared with third-party developers without risking customer privacy,
she said. Parker said synthetic data will also enable organisations to plug gaps
in the actual data used by artificial intelligence (AI) models.
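A minimal sketch of the idea, assuming hypothetical column names and a deliberately crude model (matching each numeric column's mean and standard deviation); production synthetic-data systems use far more sophisticated generators:

```python
# Generate artificial records statistically similar to the originals,
# so the real (sensitive) values never leave the business.
import numpy as np

rng = np.random.default_rng(seed=42)

# Pretend these are sensitive transaction amounts and customer ages.
real = {
    "amount": np.array([12.5, 80.0, 33.2, 150.0, 47.9]),
    "age": np.array([34, 51, 28, 62, 45]),
}

# Sample synthetic values matching each column's mean and std deviation.
synthetic = {
    col: rng.normal(values.mean(), values.std(), size=1000)
    for col, values in real.items()
}

for col in real:
    print(col, round(synthetic[col].mean(), 1), round(synthetic[col].std(), 1))
```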
Quote for the day:
"Leverage is the ability to apply
positive pressure on yourself to follow through on your decisions even when it
hurts." -- Orrin Woodward