Backup lessons learned from 10 major cloud outages
So, what’s the most critical lesson here? Back up your cloud data! And I don’t
just mean relying on your provider’s built-in backup services. As we saw with
Carbonite, StorageCraft and OVH, those backups can evaporate along with your
primary data if disaster strikes. You need to follow the 3-2-1 rule religiously:
keep at least three copies of your data, on two different media, with one copy
off-site. And in the context of the cloud, “different media” means not storing
everything in the same type of system; use different failure domains. Also,
“off-site” means in a completely separate cloud account or, even better, with a
third-party backup provider. But it’s not just about having backups; it’s about
having the right kind of backups. Take the StorageCraft incident, for example.
They lost customer backup metadata during a botched cloud migration, rendering
those backups useless. This hammers home the importance of not only backing up
your primary data but also maintaining the integrity and recoverability of your
backup data itself.
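To make that concrete, here is a minimal Python sketch of the 3-2-1 idea in a cloud context, assuming AWS S3 via boto3: objects are copied to a bucket owned by a completely separate account, then read back and hash-checked so you know the copy is actually recoverable. The bucket and profile names are hypothetical placeholders, not a prescribed setup.

```python
# Sketch: copy each object to a bucket in a *separate* account, then
# read the copy back and verify its hash. Bucket/profile names are
# placeholders; this is an illustration, not a production backup tool.
import hashlib
import boto3

SRC_BUCKET = "prod-data"        # primary copy (account A)
DST_BUCKET = "offsite-backup"   # off-site copy (account B)

src = boto3.Session(profile_name="account-a").client("s3")
dst = boto3.Session(profile_name="account-b").client("s3")

for page in src.get_paginator("list_objects_v2").paginate(Bucket=SRC_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = src.get_object(Bucket=SRC_BUCKET, Key=key)["Body"].read()
        dst.put_object(Bucket=DST_BUCKET, Key=key, Body=body)
        # Verify recoverability, not just existence: read the copy back
        # and compare content hashes -- the StorageCraft lesson.
        copy = dst.get_object(Bucket=DST_BUCKET, Key=key)["Body"].read()
        assert hashlib.md5(copy).digest() == hashlib.md5(body).digest(), \
            f"backup of {key} failed verification"
```

The read-back step is the point: a backup you have never restored from, or whose metadata has silently rotted, is not a backup.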
4 Ways to Control Cloud Costs in the Age of Generative AI
First and foremost, prioritize building a cost-conscious culture within your
organization. IT professionals face serious challenges in getting spending under control and identifying where cloud investments actually deliver value. Educating teams on
cloud cost management strategies and fostering accountability can empower them
to make informed decisions that align with business objectives. Organizations
are increasingly implementing FinOps frameworks and strategies in their cloud
cost optimization efforts as well. This promotes a shared responsibility for
cloud costs across IT teams, DevOps, and other cross-functional teams. ...
Implementing robust monitoring and optimization tools is essential. By
leveraging analytics and automation, your organization can gain real-time
insights into cloud usage patterns and identify opportunities for optimization.
Whether it's rightsizing resources, implementing cost allocation tags, or
leveraging spot instances, proactive optimization measures can yield substantial
cost savings without sacrificing performance.
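As one illustration of the cost-allocation-tag point, here is a minimal sketch that pulls a month of spend grouped by tag from AWS Cost Explorer via boto3. The tag key "team" and the date range are placeholder assumptions; any activated cost-allocation tag works.

```python
# Sketch: report last month's spend broken down by a cost-allocation tag,
# using AWS Cost Explorer. The tag key "team" is a placeholder.
import boto3

ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]   # e.g. "team$data-platform"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(cost):,.2f}")
```

A report like this is what makes FinOps-style shared accountability possible: teams can only own costs they can see attributed to them.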
Gen AI can be the answer to your data problems — but not all of them
One use case is particularly well suited to gen AI, because the technology was specifically designed to generate new text. “They’re very powerful for generating synthetic
data and test data,” says Noah Johnson, co-founder and CTO at Dasera, a data
security firm. “They’re very effective on that. You give them the structure and
the general context, and they can generate very realistic-looking synthetic
data.” The synthetic data is then used to test the company’s software, he says.
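A minimal sketch of the pattern Johnson describes, give the model the structure and general context and get realistic-looking rows back, might look like the following. It assumes the OpenAI Python SDK; the model name and the schema are illustrative placeholders, not anything Dasera uses.

```python
# Sketch: generate synthetic test data by handing a model a schema plus
# context. Model name and schema are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = (
    "Generate 5 rows of synthetic test data as CSV with columns "
    "name,email,signup_date,plan (plan is one of free/pro/enterprise). "
    "The data should look realistic but must describe no real people."
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```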
... The most important thing to know is that gen AI won’t solve all of a
company’s data problems. “It’s not a silver bullet,” says Daniel Avancini, chief
data officer at Indicium, an AI and data consultancy. If a company is just
starting on its data journey, getting the basics right is key, including
building good data platforms, setting up data governance processes, and using
efficient and robust traditional approaches to identifying, classifying, and
cleaning data. “Gen AI is definitely something that’s going to help, but there
are a lot of traditional best practices that need to be implemented first,” he
says.
Scores of Biometrics Bugs Emerge, Highlighting Authentication Risks
Biometrics generally are regarded as a step above typical authentication
mechanisms — that extra James Bond-level of security necessary for the most
sensitive devices and the most serious environments. ... The critical nature of
the environments in which these systems are so often deployed necessitates that
organizations go above and beyond to ensure their integrity. And that job takes
much more than just patching newly discovered vulnerabilities. "First, isolate a
biometric reader on a separate network segment to limit potential attack
vectors," Kiguradze recommends. Then, "implement robust administrator passwords
and replace any default credentials. In general, it is advisable to conduct
thorough audits of the device’s security settings and change any default
configurations, as they are usually easier to exploit in a cyberattack." "There
have been recent security breaches — you've probably read about them,"
acknowledges Rohan Ramesh, director of product marketing at Entrust. But in
general, he says, there are ways to protect databases with hardware security
modules and other advanced encryption technologies.
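One of Kiguradze's recommendations, replacing default credentials, can be audited in a few lines. The sketch below, in Python with the requests library, tries known factory defaults against each device's web login on the isolated segment. The IP range, credential list, and HTTP Basic auth scheme are all assumptions; many readers use other auth mechanisms, and you should only test hardware you are authorized to probe.

```python
# Sketch: flag devices on the biometric-reader segment that still accept
# factory-default credentials. IPs and credential pairs are placeholders.
import requests

DEVICES = ["10.20.0.11", "10.20.0.12"]          # isolated segment, per above
DEFAULTS = [("admin", "admin"), ("admin", "12345"), ("root", "root")]

for ip in DEVICES:
    for user, pw in DEFAULTS:
        try:
            r = requests.get(f"https://{ip}/", auth=(user, pw), timeout=5,
                             verify=False)  # many readers use self-signed certs
            if r.status_code == 200:
                print(f"{ip}: still accepts default credentials {user}/{pw}")
                break
        except requests.RequestException:
            pass  # unreachable, or the device uses a different auth scheme
```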
Mastering the tabletop: 3 cyberattack scenarios to prime your response
The ransomware CTEP explores aspects of an organization’s operational
resiliency and poses key questions aimed at understanding threats to an
organization, what information the attacker leverages, and how to conduct risk
assessments to identify specific threats and vulnerabilities to critical
assets. Given that ransomware attacks focus on data and systems, the scenario
asks key questions about the accuracy of inventories and whether there are
resources in place dedicated to mitigating known exploited vulnerabilities on
internet-facing systems. This covers not just having backups, but also knowing their retention period and how long a restore would take in an event such as a ransomware attack (a back-of-the-envelope estimate appears after this passage). Questions asked during the tabletop also focus on assessing
zero-trust architecture implementation or lack thereof. This is critical,
given that zero trust emphasizes least-permissive access control and network
segmentation, practices that can limit the lateral movement of an attack and
potentially keep it from accessing sensitive data, files, and systems.
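For the restore-time question above, even a rough calculation gives the tabletop something concrete to argue about. The sketch below is pure arithmetic with made-up numbers; the data volume, link speed, and efficiency factor are all assumptions to replace with your own.

```python
# Back-of-the-envelope answer to one tabletop question: how long would a
# full restore from backup take? All numbers are illustrative assumptions.
data_tb = 40        # total data to restore, in TB
link_gbps = 10      # restore link bandwidth, in Gbit/s
efficiency = 0.6    # realistic sustained throughput vs. line rate

effective_gbps = link_gbps * efficiency
seconds = (data_tb * 8 * 1000) / effective_gbps  # TB -> Gbit, then / (Gbit/s)
print(f"Estimated restore time: {seconds / 3600:.1f} hours")  # ~14.8 hours
```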
10 Years of Kubernetes: Past, Present, and Future
There is little risk (or reason) that Wasm will in some way displace
containers. WebAssembly’s virtues — fast startup time, small binary sizes, and
fast execution — lend strongly toward serverless workloads where there is no
long-running server process. But none of these things makes WebAssembly an
obviously better technology for the long-running server processes that are typically
encapsulated in containers. In fact, the opposite is true: Right now, few
servers can be compiled to WebAssembly without substantial changes to the
code. When it comes to serverless functions, though, WebAssembly’s
sub-millisecond cold start, near-native execution speed, and beefy security
sandbox make it an ideal compute layer. If WebAssembly will not displace
containers, then our design goal should be to complement containers. And
running WebAssembly inside of Kubernetes should involve the deepest possible
integration with existing Kubernetes features. That’s where SpinKube comes in.
Packaging a group of open source tools created by Microsoft, Fermyon, Liquid
Reply, SUSE, and others, SpinKube plumbs WebAssembly support directly into
Kubernetes. A WebAssembly application can use secrets, config maps, volume
mounts, services, sidecars, meshes, and so on.
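As a sketch of what that integration looks like in practice, the snippet below creates a SpinApp custom resource with the official Kubernetes Python client. The CRD group/version (core.spinkube.dev/v1alpha1), the executor name, and the image are assumptions drawn from SpinKube's public examples; check the CRDs actually installed in your cluster.

```python
# Sketch: deploy a Wasm app via SpinKube's SpinApp custom resource using
# the Kubernetes Python client. CRD details and image are assumptions.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

spin_app = {
    "apiVersion": "core.spinkube.dev/v1alpha1",
    "kind": "SpinApp",
    "metadata": {"name": "hello-wasm"},
    "spec": {
        "image": "ttl.sh/hello-wasm:1h",     # placeholder OCI image
        "replicas": 2,
        "executor": "containerd-shim-spin",  # runs Wasm via a containerd shim
    },
}
api.create_namespaced_custom_object(
    group="core.spinkube.dev", version="v1alpha1",
    namespace="default", plural="spinapps", body=spin_app,
)
```

From there the app behaves like any other workload: it can be wired to services, config maps, and secrets through the same manifests you already use.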
Cultivating a High Performance Environment
At the organizational level, how is a culture that supports high performers
put in place and how does it remain in place? The simple answer is that
cultural leaders must set the foundation. A great example is Gary Vaynerchuk.
As CEO of his organization, he embodies many high performing qualities we’ve
identified as power skills. He is the primary champion (Sponsor) for this
culture, hires leaders (resources) who make up a group of champions, and these
leaders hire others (teams) who expand the group of champions. Tools, tactics,
and processes are put in place by all champions at all levels to support,
build, and maintain the culture. Those who don’t resonate with high
performance are supported as well and for as long as possible. If they decide not to support the culture, they are helped to leave in a supportive manner.
As organizations change and embrace true high performance (power skills),
authentic high performers will proliferate. Organizations don’t really have a
choice about whether to move to the new paradigm. This is the way now, and the way of the future. Steve Jobs said it well: “We don’t hire experts to tell them what
to do. We hire experts to tell us what to do.”
Top 10 Use Cases for Blockchain
Smart contracts on the blockchain can also automate derivative contract execution based on pre-defined rules, as well as dividend payments. Perhaps most notable is blockchain’s ability to tokenise traditional assets such as
stocks and bonds into digital securities – paving the way for fractional
ownership. ... Blockchain can also power CBDCs – a digital form of central
bank money that offers unique advantages for central banks at retail and
wholesale levels, from enhanced financial access for individuals to greater
infrastructural efficiency for intermediate settlements. With distributed ledger technology (DLT), CBDCs can be issued, recorded and validated in a
decentralised way. ... Blockchain technology is becoming vital in the
cybersecurity space too. When it comes to digital identities, blockchain
enables the concept of self-sovereign identity (SSI), where individuals have
complete control and ownership over their digital identities and personal
data. Rather than relying on centralised authorities like companies or
governments to issue and manage identities, blockchain enables users to create
and manage their own.
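To ground the dividend-payment and fractional-ownership points above, here is the rule such a smart contract would encode, written as a plain-Python stand-in rather than on-chain code: a payout is split pro rata across fractional token holders. The balances and pool size are made-up illustrations.

```python
# Illustration of the rule a dividend-paying smart contract would encode:
# distribute a payout pro rata across fractional token holders.
# Plain-Python stand-in, not on-chain code; all figures are made up.
from decimal import Decimal

token_balances = {                  # fractional shares of a tokenised asset
    "alice": Decimal("150"),
    "bob": Decimal("50"),
    "carol": Decimal("300"),
}
dividend_pool = Decimal("1000.00")  # total payout to distribute

total_supply = sum(token_balances.values())
for holder, balance in token_balances.items():
    payout = (dividend_pool * balance / total_supply).quantize(Decimal("0.01"))
    print(f"{holder}: {payout}")    # alice: 300.00, bob: 100.00, carol: 600.00
```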
Encryption as a Cloud-to-Cloud Network Security Strategy
Like upper management, some network analysts and IT leaders resist using data encryption. First, they view encryption as overkill, both in technology and in the budget. Second, they may not have much first-hand experience with data encryption: encryption relies on black-box mathematical algorithms that few IT professionals understand or care about. Third, if you opt to use encryption, you have to make the right choice among many different encryption options. In some cases, an industry regulation dictates the choice of encryption, which simplifies the decision. This can actually be a benefit on the
budget side because you don't have to fight for new budget dollars when the
driver is regulatory compliance. However, even if you don't have a regulatory
requirement for the encryption of data in transit, security risks are growing
if you run without it. Unencrypted data in transit can be intercepted by
malicious actors for purposes of identity theft, intellectual property theft,
data tampering, and ransomware. The more companies move into a hybrid
computing environment that operates on-premises and in multiple clouds, the
greater their risk since more data that is potentially unprotected is moving
from point to point over this extended outside network.
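A minimal sketch of application-layer encryption for data crossing that extended network, using the cryptography library's Fernet (authenticated, AES-based), might look like the following. It complements transport TLS rather than replacing it, and key management (a KMS or HSM) is deliberately out of scope here; the payload is a placeholder.

```python
# Sketch: encrypt a payload before it leaves one cloud for another, using
# the cryptography library's Fernet. Complements TLS; key handling is
# simplified -- in practice the key comes from a KMS/HSM, not generate_key().
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # placeholder for a KMS-managed key
f = Fernet(key)

payload = b"customer-records batch 42"
token = f.encrypt(payload)          # ciphertext safe to send between clouds
# ... transmit token to the destination cloud ...
assert f.decrypt(token) == payload  # receiver decrypts with the shared key
```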
Automated Testing in DevOps: Integrating Testing into Continuous Delivery
Automated testing shifts ownership of quality to the engineering team. Engineers can prepare test plans, or assist with the process, alongside regular roadmap feature development, and then execute those plans using continuous integration tools. With the help of an efficient automation
testing company, you can reduce the QA team size and let quality analysts
focus more on vital and sensitive features. ... The major goal of continuous
delivery is to deliver new code releases to customers as fast as possible.
If there is any manual or time-consuming step within the delivery process, delivering to users quickly becomes challenging, if not impossible. Continuous delivery is part of a greater deployment pipeline: it is a successor to, and relies on, continuous integration. Continuous integration is responsible for running automated tests against new code changes and verifying that they do not break existing features or introduce new bugs. Continuous delivery takes place once the CI step passes the automated test plan.
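The CI-to-CD handoff just described reduces to a simple gate: run the automated test plan, and ship only if it passes. The sketch below shows that gate in Python; the pytest invocation and deploy script are placeholder assumptions for whatever your pipeline actually runs.

```python
# Sketch of the CI -> CD handoff: deploy only when the automated test
# plan passes. Test and deploy commands are placeholders.
import subprocess
import sys

tests = subprocess.run(["pytest", "--maxfail=1", "-q"])  # CI: automated tests
if tests.returncode != 0:
    sys.exit("Tests failed; release blocked.")           # gate stays closed

deploy = subprocess.run(["./deploy.sh", "production"])   # CD: ship the build
sys.exit(deploy.returncode)
```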
Quote for the day:
"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose