6 Reasons Why Internal Data Centers Won’t Disappear
Most companies are moving to a hybrid computing model, which is a mix of on-premises and cloud-based IT. The value of a hybrid computing approach is that
it gives organizations agility and flexibility. You have the option of
insourcing or outsourcing systems whenever there is a business or technology
need to do so. By adopting a hybrid strategy, companies can also take advantage
of the best strategic, operational and cost options. In some cases, a “best
choice” might be to outsource to the cloud. In other cases, an in-house option
might be preferable. Here is an example: A large company with a highly
customized ERP system from a well-known vendor acquires a smaller company.
Operationally, the desire is to move the newly acquired, smaller company onto
the enterprise in-house ERP system, but there are so many customized programs
and interfaces that the company decides instead to move the new company onto a
cloud-based, generic version of the software. The advantage is the newly
acquired company gets acclimated to the features and functions of the ERP
system. Going forward, the parent company has the option of either migrating the new company over to the corporate ERP system, without being rushed into that migration, or joining the newly acquired company in the cloud by moving the enterprise ERP system there as well.
What is cryptography? How algorithms keep information secret and safe
Secret key cryptography, sometimes also called symmetric key, is widely used
to keep data confidential. It can be very useful for keeping a local hard
drive private, for instance; since the same user is generally encrypting and
decrypting the protected data, sharing the secret key is not an issue. Secret
key cryptography can also be used to keep messages transmitted across the
internet confidential; however, to successfully make this happen, you need to
deploy our next form of cryptography in tandem with it. ... In public key
cryptography, sometimes also called asymmetric key, each participant has two
keys. One is public, and is sent to anyone the party wishes to communicate
with. That's the key used to encrypt messages. But the other key is private,
shared with nobody, and it's necessary to decrypt those messages. To use a
metaphor: think of the public key as opening a slot on a mailbox just wide
enough to drop a letter in. You give those dimensions to anyone who you think
might send you a letter. The private key is what you use to open the mailbox
so you can get the letters out. The mathematics of how you can use one key to
encrypt a message and another to decrypt it are much less intuitive than the
way the key to the Caesar cipher works.
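To make the two approaches concrete, here is a minimal Python sketch using the third-party cryptography package; the key variables and messages are placeholders, and a real system would also need key management and authentication on top of this.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Secret (symmetric) key: the same key encrypts and decrypts, which works
# well when one party is protecting its own data, such as a local drive.
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
token = f.encrypt(b"local hard drive contents")
assert f.decrypt(token) == b"local hard drive contents"

# Public (asymmetric) key: anyone holding the public key can "drop a letter
# in the slot"; only the private key holder can open the mailbox.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = public_key.encrypt(b"message for the mailbox owner", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"message for the mailbox owner"
```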
Mitigating Business Risks in Your 5G Deployment
For 5G networks to thrive, the underlying architecture will be distributed in
the cloud and will no longer be dependent on dedicated appliances. The
corresponding implementation and deployment of the carriers’ networks will
evolve to expand capacity, reduce latency, and lower costs and power requirements. To reinforce this open environment, organizations using 5G
will have to virtualize their network functions, resulting in less control
over the physical elements of the networks in exchange for the 5G benefits in
infrastructure. Services are also no longer restricted to service providers’
networks and can originate from external network domains. This means that
services can rely on virtualized network resources that are physically closer to the connected device for more efficient delivery. 5G architectures will rely on a
software-defined networking/network functions virtualization
(SDN/NFV)-supported foundation for their transition to the cloud. This change
to the network infrastructure leads to corresponding deviations to the
cyberattack threat landscape. 5G will utilize the concept of network slicing
to enable service providers to “slice” portions of a spectrum to offer
specialized services for specific device types, all the while remaining in the
same physical infrastructure.
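As a rough illustration of the slicing idea, the sketch below models slices as logical profiles carved out of one shared physical infrastructure; the slice names and parameter values are invented for illustration, not taken from any carrier's design.

```python
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    """A logical slice carved out of shared physical 5G infrastructure."""
    name: str
    device_type: str
    max_latency_ms: float
    bandwidth_mbps: int

# Hypothetical slices sharing the same physical network, each tuned to a
# specific class of device (all values are illustrative only).
slices = [
    NetworkSlice("massive-iot", "sensor", max_latency_ms=500.0, bandwidth_mbps=1),
    NetworkSlice("enhanced-broadband", "smartphone", max_latency_ms=50.0, bandwidth_mbps=500),
    NetworkSlice("ultra-reliable", "autonomous-vehicle", max_latency_ms=5.0, bandwidth_mbps=50),
]

for s in slices:
    print(f"{s.name}: {s.device_type} devices, <= {s.max_latency_ms} ms, {s.bandwidth_mbps} Mbps")
```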
Microsoft fights botnet after Office 365 malware attack
According to filed court documents, Microsoft sought permission to take over
domains and servers belonging to the malicious Russia-based group. It also
wanted legal assent to block IP addresses associated with the plot and prevent
the entities behind it from purchasing or leasing servers. The requests were
part of a grander plan of action to destroy data stored in the hackers'
systems. The intention was first to block access to servers controlling over 1
million infected machines. This move would be a crucial step in halting
control of over an additional 250 million breached email addresses. Microsoft
has said that Trickbot’s strategy was mostly successful because it used a
custom third-party Office 365 app. Tricking users into installing it allowed
perpetrators to bypass passwords instead of relying on the OAuth2 token.
Through this technique, they could access compromised Microsoft 365 user
accounts and sensitive data associated with them, such as email content and
contact lists. In the court documents, Microsoft laments that Trickbot used
authentic-looking Microsoft email addresses and other company information to
deceive its clients. It argues that the network used its name and
infrastructure for malicious purposes, thereby tarnishing its image.
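To see why possession of an OAuth2 token is enough to reach a mailbox without a password, here is a minimal, hypothetical Python sketch against the Microsoft Graph API; the token value is a placeholder, and the request only succeeds for an app that has actually been granted mail permissions. This is a generic illustration of token-based access, not a reconstruction of Trickbot's tooling.

```python
import requests

# Placeholder: an access token issued to a (consented) Office 365 app.
# Whoever holds this bearer token can call the API; no password is involved.
ACCESS_TOKEN = "<oauth2-access-token>"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages?$top=5",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

# Print subject lines of the five most recent messages in the mailbox.
for message in resp.json().get("value", []):
    print(message.get("subject"))
```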
Breaking Serverless on Purpose with Chaos Engineering
“You should stop when something goes wrong, even if you are not running it in
production. You should stop just to understand how you are going to roll back
when such things happen,” Samdan said. He echoed what Liz Fong-Jones said in
her ChaosConf talk: that you should absolutely intentionally plan when you
have your chaos experiments and let everyone know ahead. “You don’t need to
surprise other people. You don’t need to surprise other departments. And, most
importantly, in production, your customers should know about it,” he said. So
if something goes terribly wrong, they aren't worried, because you talked about it ahead of time and you already had a rollback plan that you shared with them. Chaos gets way more complicated in serverless environments, which
are highly distributed and event-driven. Risks with serverless tend to come
from the services you don’t have insight or control over. Essentially,
serverless is chaotic at its heart. With serverless you inherit a whole new
set of failures, within its many resources ... He says a common fix for
serverless issues is to aim for asynchronous communication whenever possible
and then properly tune synchronous timeouts. Other serverless fixes include
putting circuit breakers in place and using exponential backoff to pace retransmissions at an acceptable rate.
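As a minimal sketch of those two fixes, the Python snippet below retries a hypothetical downstream call with exponential backoff plus jitter, and trips a simple circuit breaker after repeated consecutive failures; the function names, thresholds, and delays are assumptions for illustration, not from the article.

```python
import random
import time

class CircuitBreaker:
    """Tiny circuit breaker: after `threshold` consecutive failures,
    refuse further calls for `cooldown` seconds."""
    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.open_until = 0.0

    def call(self, fn):
        if time.monotonic() < self.open_until:
            raise RuntimeError("circuit open: downstream considered unhealthy")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open_until = time.monotonic() + self.cooldown
            raise
        self.failures = 0
        return result

def retry_with_backoff(fn, attempts=5, base_delay=0.2):
    """Retry `fn`, pacing retransmissions with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Usage (hypothetical flaky downstream dependency):
# breaker = CircuitBreaker()
# data = retry_with_backoff(lambda: breaker.call(lambda: flaky_service.fetch()))
```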
Audit .NET/.NET Core Apps with Audit.NET and AWS QLDB
As a request flows through the system, new information is added to the audit event: the component name, the identity or user name behind the executing request, the data before and after it was altered, timestamps, machine names, a common identifier to correlate the request across components, and any other information that might be needed to reconcile the request with other systems. This operation is vital for some businesses, so it is often considered part of the transaction: the cancellation of a contract is only considered successful if there is also a record in the audit trail. One could rely on the ILogger interfaces to implement this requirement, but there are a few problems: logging can easily be turned off, a failure to write a log message won't crash the application, and ILogger has no specialized primitives for audit logging. ... Audit.NET is an extensible
framework to audit executing operations in .NET and .NET Core. It comes with
two types of extensions: the data providers (or the data sinks) and the
interaction extensions. The data providers are used to store the audit events
into various persistent storages and the interaction extensions are used to
create specialized audit events based on the executing context like Entity
Framework, MVC, WCF, HttpClient, and many others.
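To make the shape of such an audit event concrete, here is a small, language-agnostic sketch written in Python rather than Audit.NET's actual C# API; the field names, the ContractService component, and the cancel_contract operation are illustrative assumptions. The point is that the audit write is part of the operation itself, so a failure to record the event fails the operation, unlike a fire-and-forget logger call.

```python
import socket
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """One audit record correlating a business operation across components."""
    component: str
    user: str
    action: str
    data_before: dict
    data_after: dict
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    machine: str = field(default_factory=socket.gethostname)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def cancel_contract(contract, audit_store):
    """Cancel a contract; it only counts as successful if the audit record
    is persisted as part of the same logical transaction."""
    before = dict(contract)
    contract["status"] = "cancelled"
    event = AuditEvent(
        component="ContractService",
        user="jane.doe",              # illustrative identity
        action="CancelContract",
        data_before=before,
        data_after=dict(contract),
    )
    # If this write fails it raises, and the caller rolls the operation back.
    audit_store.append(asdict(event))
    return contract

# Usage: a list stands in for a durable audit store such as QLDB.
trail = []
print(cancel_contract({"id": "C-42", "status": "active"}, trail))
print(trail[0]["action"], trail[0]["correlation_id"])
```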
WFH has left workers feeling abandoned.
One in three employees admitted that being away from the office had lowered
their morale, with respondents reporting that they feel distracted during
their work day, and easily stressed out at work. What's more: there seems to
be consensus that employers have not gone far enough in supporting their
workforce. Fewer than a quarter of employees in the US and Europe received guidance from their employer on working remotely, on topics ranging from tips for new ways of working to data security best practices. But despite the
potential difficulties of working from home day-in, day-out, HP's research
found that office workers are keeping an eye on the bigger picture – and that
overall, respondents seemed positive about the future. The majority of
employees surveyed agreed that the new ways of working caused by the crisis
would allow them to change their work environments for the better. Over the
past few months, workers have been gauging what the future holds for their
nine-to-five, and preparing accordingly. The survey shows that many employees
have identified continuous learning and upskilling as key to their success,
and have lost no time in re-training themselves. From leadership skills
to foreign languages to IT and tech support knowledge, almost six in ten
respondents said that they were currently learning at least one new skill,
often through free online programs.
Twitter hack probe leads to call for cybersecurity rules for social media giants
The report concludes this is a problem U.S. lawmakers need to get on and
tackle stat — recommending that an oversight council be established (to
“designate systemically important social media companies”) and an
“appropriate” regulator appointed to ‘monitor and supervise’ the security
practices of mainstream social media platforms. “Social media companies have
evolved into an indispensable means of communications: more than half of
Americans use social media to get news, and connect with colleagues, family,
and friends. This evolution calls for a regulatory regime that reflects social
media as critical infrastructure,” the NYSDFS writes, before going on to point
out there is still “no dedicated state or federal regulator empowered to
ensure adequate cybersecurity practices to prevent fraud, disinformation, and
other systemic threats to social media giants”. “The Twitter Hack
demonstrates, more than anything, the risk to society when systemically
important institutions are left to regulate themselves,” it adds. “Protecting
systemically important social media against misuse is crucial for all of us —
consumers, voters, government, and industry. The time for government action is
now.”
Google, Intel Warn on ‘Zero-Click’ Kernel Bug in Linux-Based IoT Devices
The flaw, which Google calls “BleedingTooth,” can be exploited in a
“zero-click” attack via specially crafted input, by a local, unauthenticated
attacker. This could potentially allow for escalated privileges on affected
devices. “A remote attacker in short distance knowing the victim’s bd
[Bluetooth] address can send a malicious l2cap [Logical Link Control and
Adaptation Layer Protocol] packet and cause denial of service or possibly
arbitrary code execution with kernel privileges,” according to a Google post
on Github. “Malicious Bluetooth chips can trigger the vulnerability as well.”
The flaw (CVE-2020-12351) ranks 8.3 out of 10 on the CVSS scale, making it
high-severity. It specifically stems from a heap-based type confusion in
net/bluetooth/l2cap_core.c. A type-confusion vulnerability is a bug in which memory is interpreted as the wrong type; it can lead to out-of-bounds memory access, which in turn can result in code execution or component crashes that an attacker can exploit. In this case, the issue is insufficient validation of user-supplied input within the BlueZ implementation in the Linux kernel. Intel, meanwhile, which has placed
“significant investment” in BlueZ, addressed the security issue in a Tuesday
advisory, recommending that users update the Linux kernel to version 5.9 or
later.
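As a rough, language-agnostic illustration of type confusion (not the actual BlueZ bug), the Python sketch below writes bytes with one layout and reinterprets them with another, so a field the reader trusts, here a length, comes out wrong; a C program doing the same with pointers could then read or write out of bounds.

```python
import struct

# Layout A: a configuration record with two 16-bit fields.
config_record = struct.pack("<HH", 0x0004, 0x0001)   # cid=4, mode=1

# Layout B: code elsewhere wrongly treats the same bytes as a data record
# whose first field is a 32-bit payload length.
(payload_len,) = struct.unpack("<I", config_record)

# The "length" is now a confused value assembled from unrelated fields.
# Code that trusts it to index into a small buffer would read out of bounds.
print(hex(payload_len))   # 0x10004 -> far larger than any real payload here
```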
There’s no better time to join the quantum computing revolution
It’s an exciting time to be in quantum information science. Investments are
growing across the globe, such as the recently announced U.S. Quantum Information Science Research Centers, which bring together the best of the public and private
sectors to solve the scientific challenges on the path to a commercial-scale
quantum computer. While there’s increased research investment worldwide, there
are not yet enough skilled developers, engineers, and researchers to take
advantage of this emerging quantum revolution. Here’s where you come in.
There’s no better time to start learning about how you can benefit from quantum
computing, and solve currently unsolvable questions in the future. Here are some
of the resources available to start your journey. Many developers,
researchers, and engineers are intrigued by the idea of quantum computing, but
may not have started because they don't know how to begin, how to apply
it, or how to use it in their current applications. We’ve been listening to the
growing global community and worked to make the path forward easier. Take
advantage of these free self-paced resources to learn the skills you need to get
started with quantum.
Quote for the day:
"Tomorrow's leaders will not lead dictating from the front, nor pushing from the back. They will lead from the centre - from the heart" -- Rasheed Ogunlaru