The importance of connectivity in IoT
There is little point in having IoT if the connectivity is weak. Without
reliable connectivity, data from sensors and devices that is meant to be
collected and analysed in real time may arrive too late to act on. In
healthcare, for example, connected devices monitor the vital signs of patients
in an intensive-care ward in real time and alert physicians to any readings
outside the specified limits.
... The future evolution of connectivity technologies will combine with
IoT to significantly expand its capabilities. The arrival of 5G will enable
high-speed, low-latency connections. This transition will usher in IoT systems
that were previously impossible, such as self-driving vehicles that
instantaneously analyse vehicle states and provide real-time collision
avoidance. The evolution of edge computing will bring data-processing closer
to the edge (the IoT devices), thereby significantly reducing latency and
bandwidth costs. Connectivity underpins almost everything we see as important
with IoT – the data exchange, real-time usage, scale and interoperability we
access in our systems.
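The edge-computing idea above can be made concrete with a small sketch: process readings on the device and forward only out-of-range alerts, reducing both latency and bandwidth. All names, vitals, and threshold values here are invented for illustration, not taken from any real monitoring system.

```python
# Edge-side filtering sketch for a vitals monitor: readings are checked
# locally and only out-of-range values are transmitted upstream.

# Acceptable ranges per vital sign (illustrative limits, not clinical guidance)
LIMITS = {
    "heart_rate": (40, 130),  # beats per minute
    "spo2": (92, 100),        # % oxygen saturation
}

def check_reading(vital: str, value: float) -> bool:
    """Return True if the reading falls outside its configured limits."""
    low, high = LIMITS[vital]
    return not (low <= value <= high)

def edge_filter(readings: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Keep only the readings that should trigger an upstream alert."""
    return [(v, x) for v, x in readings if check_reading(v, x)]

# Only the two out-of-range readings would be forwarded to the clinician.
alerts = edge_filter([("heart_rate", 72), ("spo2", 88), ("heart_rate", 145)])
```

The design choice is the one the excerpt describes: the in-range majority of readings never leaves the device, so the backhaul link carries alerts rather than raw telemetry.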
Aren’t We Transformed Yet? Why Digital Transformation Needs More Work
When it comes to enterprise development, platforms alone can’t address the
critical challenge of maintaining consistency between development, test,
staging, and production environments. What teams really need is seamless
propagation of changes between environments that are kept production-like
through synchronization, with full control over the process. This control
enables the integration of crucial safety steps such as approvals, scans, and
automated testing, ensuring that issues are caught and addressed early in the
development cycle. Many enterprises are implementing real-time visualization
capabilities to provide administrators and developers with immediate insight
into differences between instances, including scoped apps, store apps,
plugins, update sets, and even versions across the entire landscape. This
extended visibility is invaluable for quickly identifying and resolving
discrepancies before they can cause problems in production environments. A
lack of focus on achieving real-time multi-environment visibility is akin to
performing a medical procedure without an X-ray, CT, or MRI of the
patient.
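The multi-environment visibility described above can be sketched as a simple version diff across instances. The instance names, components, and version numbers below are invented for illustration; a real platform would pull this inventory from each instance's API.

```python
# Sketch: surface version drift between environments before it causes
# problems in production. Input is a mapping of environment name ->
# {component: version}.

def diff_environments(envs: dict[str, dict[str, str]]) -> dict[str, dict[str, str]]:
    """Return components whose versions differ between any two environments."""
    all_components = {c for versions in envs.values() for c in versions}
    drift = {}
    for component in sorted(all_components):
        versions = {env: envs[env].get(component, "missing") for env in envs}
        if len(set(versions.values())) > 1:  # more than one distinct version
            drift[component] = versions
    return drift

# Hypothetical landscape: prod lags dev on one app, dev lags on one plugin.
landscape = {
    "dev":     {"incident_app": "2.4.1", "audit_plugin": "1.0.0"},
    "staging": {"incident_app": "2.4.1", "audit_plugin": "1.1.0"},
    "prod":    {"incident_app": "2.3.9", "audit_plugin": "1.1.0"},
}
drift = diff_environments(landscape)
```

Here both components would be flagged, giving administrators the immediate, landscape-wide view of discrepancies the article argues for.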
Why Staging Doesn’t Scale for Microservice Testing
So are we doomed to live in a world where staging is eternally broken? As
we’ve seen, traditional approaches to staging environments are fraught with
challenges. To overcome these, we need to think differently. This brings us to
a promising new approach: canary-style testing in shared environments. This
method allows developers to test their changes in isolation within a shared
staging environment. It works by creating a “shadow” deployment of the
services affected by a developer’s changes while leaving the rest of the
environment untouched. This approach is similar to canary deployments in
production but applied to the staging environment. The key benefit is that
developers can share an environment without affecting each other’s work. When
a developer wants to test a change, the system creates a unique path through
the environment that includes their modified services, while using the
existing versions of all other services. Moreover, this approach enables
testing at the granularity of every code change or pull request. This means
developers can catch issues very early in the development process, often
before the code is merged into the main branch.
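The routing mechanism described above can be sketched in a few lines: requests tagged with a sandbox identifier are steered to that developer's shadow deployments, while every other service resolves to the shared baseline. The header name, service names, and registry shape are all assumptions made for this sketch.

```python
# Canary-style routing sketch for a shared staging environment: a request
# carrying a sandbox ID uses that sandbox's shadow services where they
# exist, and the shared baseline everywhere else.

BASELINE = {"cart": "cart-v1", "payments": "payments-v1", "search": "search-v1"}

# Shadow deployments registered per sandbox (e.g., one sandbox per pull request)
SANDBOXES = {
    "pr-1234": {"payments": "payments-pr-1234"},
}

def resolve(service: str, headers: dict[str, str]) -> str:
    """Pick the shadow version if the request's sandbox overrides this service."""
    sandbox = headers.get("x-sandbox-id")
    overrides = SANDBOXES.get(sandbox, {})
    return overrides.get(service, BASELINE[service])

# A tagged request reaches the developer's modified payments service...
assert resolve("payments", {"x-sandbox-id": "pr-1234"}) == "payments-pr-1234"
# ...while untouched services, and untagged requests, use the baseline.
assert resolve("cart", {"x-sandbox-id": "pr-1234"}) == "cart-v1"
assert resolve("payments", {}) == "payments-v1"
```

Because isolation lives in the routing layer rather than in duplicated environments, each pull request gets its "unique path" through staging without cloning the whole service graph.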
A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it
The act contains a list of prohibited high-risk systems. This list includes AI
systems that use subliminal techniques to manipulate individual decisions. It
also includes unrestricted, real-time facial recognition systems used by
law enforcement authorities, similar to those currently used in China. Other
AI systems, such as those used by government authorities or in education and
healthcare, are also considered high risk. Although these aren’t prohibited,
they must comply with many requirements. ... The EU is not alone in taking
action to tame the AI revolution. Earlier this year the Council of Europe, an
international human rights organisation with 46 member states, adopted the
first international treaty requiring AI to respect human rights, democracy and
the rule of law. Canada is also discussing its proposed Artificial
Intelligence and Data Act. Like the EU law, this will set rules for various AI
systems, depending on their risks.
Instead of a single law, the US government recently proposed a number of
different laws addressing different AI systems in various sectors. ... The
risk-based approach to AI regulation, used by the EU and other countries, is a
good start when thinking about how to regulate diverse AI technologies.
Building constructive partnerships to drive digital transformation
The finance team needs to have a ‘seat at the table’ from the very beginning
to overcome these challenges and effect successful transformation. Too often,
finance only becomes involved when it comes to the cost and financing of the
project, and when finance leaders do try to become involved, they can have
difficulty gaining access to the needed data. This was recently confirmed by
members of the Future of Finance Leadership Advisory Group, where almost half
of the group polled (47%) noted challenges gaining access to needed data. As
finance professionals understand the needs of stakeholders within the
business, they are in the best position to outline what is needed for IT to
create an effective, efficient structure. Finance professionals are in-house
consultants who collaborate with other functions to understand their workings
and end-to-end procedures, discover where both problems and opportunities
exist, identify where processes can be improved, and ultimately find
solutions. Digital transformation projects rely on harmonizing processes and
standardizing systems across different operations.
DevSecOps: Integrating Security Into the DevOps Lifecycle
The core of DevSecOps is ‘security as code’, a principle that dictates
embedding security into the software development process. To keep every
release tight on security, we weave those practices into the heart of our
CI/CD flow. Automation is key here: it streamlines security checks throughout
the development process, keeping us secure from the start without slowing us
down. A shared responsibility model is another pillar of DevSecOps. Security
is no longer the sole domain of a separate security team but a shared concern
across all teams involved in the development lifecycle. Working together,
security isn’t just slapped on at the end but baked into every step from start
to finish. ... Adopting DevSecOps is not without its challenges. Shifting to
DevSecOps means we’ve got to knock down the walls that have long kept our
devs, ops and security folks in separate corners. Balancing the need for rapid
deployment with security considerations can be challenging. To nail DevSecOps,
teams must level up their skills through targeted training. Weaving together
seasoned systems with cutting-edge DevSecOps tactics calls for a sharp,
strategic approach.
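A minimal sketch of the "security as code" gate described above: a check that runs in the CI pipeline and fails the build when a pinned dependency matches a known advisory. The advisory table, package names, and CVE identifier are invented for illustration; a real pipeline would invoke a dependency scanner rather than an inline table.

```python
# Toy security gate, runnable as a CI step: block the merge when a pinned
# dependency is in the (illustrative) advisory list.

ADVISORIES = {
    ("requests", "2.5.0"): "CVE-XXXX-0001 (illustrative)",
}

def audit(pinned: dict[str, str]) -> list[str]:
    """Return human-readable findings for vulnerable pins."""
    return [
        f"{name}=={version}: {ADVISORIES[(name, version)]}"
        for name, version in pinned.items()
        if (name, version) in ADVISORIES
    ]

findings = audit({"requests": "2.5.0", "flask": "3.0.0"})
if findings:
    # In CI, a non-zero exit here blocks the release until fixed.
    print("\n".join(findings))
```

The point of the pattern is the one the article makes: the check is versioned code in the pipeline, run automatically on every change, not a manual review bolted on at the end.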
Critical Android Vulnerability Impacting Millions of Pixel Devices Worldwide
This backdoor vulnerability, undetectable by standard security measures,
allows unauthorized remote code execution. Because the app holds privileged
system-level status and cannot be uninstalled, cybercriminals can compromise
devices without user intervention or knowledge. The
Showcase.apk application possesses excessive system-level privileges, enabling
it to fundamentally alter the phone’s operating system despite performing a
function that does not necessitate such high permissions. An application’s
configuration file retrieval lacks essential security measures, such as domain
verification, potentially exposing the device to unauthorized modifications
and malicious code execution through compromised configuration parameters. The
application suffers from multiple security
vulnerabilities. Insecure default variable initialization during certificate and signature
verification allows bypass of validation checks. Configuration file tampering
risks compromise, while the application’s reliance on bundled public keys,
signatures, and certificates creates a bypass vector for verification.
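The "insecure default variable initialization" flaw called out above is a classic fail-open pattern. The sketch below is not the actual Showcase.apk code, only an illustration of the vulnerability class and its fail-closed fix.

```python
# Fail-open vs fail-closed verification: if the result variable defaults to
# True, any exception that skips the check leaves validation bypassed.

def verify_fail_open(signature_check) -> bool:
    verified = True                 # BUG: insecure default assumes success
    try:
        verified = signature_check()
    except Exception:
        pass                        # check skipped, but verified is still True
    return verified

def verify_fail_closed(signature_check) -> bool:
    verified = False                # fix: default to rejection
    try:
        verified = signature_check()
    except Exception:
        pass                        # check skipped -> input stays rejected
    return verified

def broken_check():
    raise RuntimeError("certificate parsing failed")

assert verify_fail_open(broken_check) is True      # validation bypassed
assert verify_fail_closed(broken_check) is False   # tampering rejected
```

Initializing the verdict to "rejected" ensures that any error path in certificate or signature handling denies, rather than grants, trust.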
Using Artificial Intelligence in surgery and drug discovery
“We’re seeing how AI is adapting, learning, and starting to give us more
suggestions and even take on some independent tasks. This development is
particularly thrilling because it spans across diagnostics, therapeutics, and
theranostics—covering a wide range of medical areas. We’re on the brink of AI
and robotics merging together in a very meaningful way,” Dr Rao said. However,
he said he would like to add a word of caution. He said he often tells junior
enthusiasts who are eager to use AI in everything: AI is not a replacement for
natural stupidity. ... He said that one of the most impressive applications of
this AI was during the preparation of a US FDA application, which is typically
a very cumbersome and expensive process. “At that point, I’d already completed
the preclinical phase but wasn’t certain about the additional 20-30 tests I
might need. Instead of spending hundreds of thousands of dollars on trial and
error, we fed all our data into this AI system. Now, it’s important to note
that pharma companies are usually reluctant to share their proprietary data,
so gathering information is often a challenge,” he said.
Mastercard Is Betting on Crypto—But Not Stablecoins
“We’re opening up this crypto purchase power to our 100 million-plus
acceptance locations,” Raj Dhamodharan, Mastercard's head of crypto and
blockchain, told Decrypt. “If consumers want to buy into it, if they want to
be able to use it, we want to enable that—in a safe way.” Perhaps in the name
of safety, the new MetaMask Card isn’t compatible with most cryptocurrencies.
You can’t use it to buy a plane ticket with Pepecoin, or a sandwich with SHIB.
The card is only compatible with dominant stablecoins USDT and USDC, as well
as wrapped Ethereum. ... Dhamodharan and his team are currently endeavoring to
create an alternative system to stablecoins that—instead of putting crypto
companies like Circle and Tether in the catbird seat of the new digital
economy—keeps payment services like Mastercard, and traditional banks, at
center. Key to this plan is unlocking the potential of bank deposits, which
already exist on digital ledgers—just not ones that live on-chain. Dhamodharan
estimates that some $15 trillion worth of digital bank deposits currently
exist in the United States alone.
A Group Linked To Ransomhub Operation Employs EDR-Killing Tool
Experts believe RansomHub is a rebrand of the Knight ransomware. Knight, also
known as Cyclops 2.0, appeared in the threat landscape in May 2023. The
malware targets multiple platforms, including Windows, Linux, macOS, ESXi, and
Android. The operators used a double extortion model for their RaaS operation.
Knight ransomware-as-a-service operation shut down in February 2024, and the
malware’s source code was likely sold to the threat actor who relaunched the
RansomHub operation. ... “One main difference between the two ransomware
families is the commands run through cmd.exe. While the specific commands may
vary, they can be configured either when the payload is built or during
configuration. Despite the differences in commands, the sequence and method of
their execution relative to other operations remain the same.” states the
report published by Symantec. Although RansomHub only emerged in February
2024, it has rapidly grown and, over the past three months, has become the
fourth most prolific ransomware operator based on the number of publicly
claimed attacks.
Quote for the day:
"When your values are clear to you, making decisions becomes easier." --
Roy E. Disney