Internet of Cars: A driver-side primer on IoT implementation
There are millions of internet-connected cars already on the road, albeit mostly
with crude subscription services for music and weather apps. With further
advances, connectivity will become far more encompassing, with the average connected car carrying up to 200 sensors, each recording a data point minute by minute. The numbers quickly become staggering, and in emergency situations,
the need for data agility is apparent. Picture driving on a highway in medium
traffic. If someone’s tire blows out half a mile ahead, this information could
be quickly conveyed to surrounding cars, warning of the potential for emergency
braking. Any DLT solution would have to include a very nimble verification
process for all these new packets of information to be brought into and carried
by the network. Additionally, because of the computational complexity involved,
almost all DLTs today charge a fee for each new transaction brought into the
network. In fact, the fee is an integral part of the structure of many of these
computational models. This is obviously not going to be workable in a system
like urban traffic that would be generating billions of “transactions” every
day. The truth is that decentralized data networks were never designed to handle
these kinds of massive use-case scenarios.
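To make the scale problem concrete, here is a minimal back-of-envelope sketch in Python; the message fields, the per-transaction fee, and the alert volume are illustrative assumptions, not figures from any particular DLT:

```python
from dataclasses import dataclass

@dataclass
class HazardAlert:
    """One hypothetical V2V hazard message -- a single network 'transaction'."""
    vehicle_id: str
    event: str       # e.g. "tire_blowout"
    lat: float
    lon: float
    timestamp: float

alert = HazardAlert("car-4821", "tire_blowout", 40.4406, -79.9959, 1_700_000_000.0)
print(f"Broadcast to nearby cars: {alert.event} at ({alert.lat}, {alert.lon})")

# Illustrative back-of-envelope numbers, not measured values.
FEE_PER_TX = 0.0001             # dollars per transaction, optimistic for most DLTs
ALERTS_PER_DAY = 2_000_000_000  # "billions of transactions every day"

print(f"Daily fee burden: ${FEE_PER_TX * ALERTS_PER_DAY:,.0f}")
# -> $200,000 per day in fees alone, before verification latency is considered,
#    which is why a fee-per-transaction model cannot work for urban traffic.
```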
AI vendors may have to prove systems don't discriminate
Providing proof that AI models are non-discriminatory means AI vendors would
have to become much more transparent about how AI models were trained and
developed, according to Purcell. "In the bill, it talks about the necessity of
understanding what the training data was that went into creating the model," he
said. "That's a big deal because today, a lot of AI vendors can just build a
model kind of in secret or in the shadows and then put it on the market. Unless
the model is being used for a highly regulated use case like credit
determination or something like that, very few people ask questions." That could
be easier for the biggest AI vendors, including Google and Microsoft, which have
invested heavily in explainable AI for years. Purcell said that investment in
transparency serves as a differentiator for them now. In general, bias in an AI
system largely results from the data the system is trained on. The model itself
"does not come with built-in discrimination, it comes as a blank canvas of sorts
that learns from and with you," said Alan Pelz-Sharpe, founder and principal
analyst at Deep Analysis. Yet, many vendors sell pre-trained models as a way to
save their clients the time and know-how it normally takes to train a model.
That's ordinarily uncontroversial if the model is used to, say, detect the
difference between an invoice and a purchase order, Pelz-Sharpe continued.
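Since bias largely enters through the training data, one first-pass transparency check a vendor could run is a comparison of favorable-outcome rates across groups in that data. A minimal sketch, assuming labeled (group, outcome) records; the sample data and the four-fifths cutoff are illustrative conventions, not requirements of the bill:

```python
from collections import defaultdict

def positive_rates(records):
    """Rate of favorable outcomes per group in labeled training data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical (group, outcome) pairs standing in for training labels.
data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

rates = positive_rates(data)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the classic four-fifths rule of thumb
    print("Warning: training labels skew against one group.")
```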
Microsoft releases Application Guard for Office to M365 customers
Application Guard for Office isolates certain files opened in the suite's three
primary applications: Word, Excel and PowerPoint. Documents obtained from
untrusted Internet or intranet domains, files pulled from potentially unsafe
areas, and attachments received through the Outlook email client, are opened in
a virtualized environment, or sandbox, where malicious code can't wreak havoc.
Unlike the much older Protected View, another Office defensive feature that opens potentially dangerous documents as read-only, files opened in Application Guard can be manipulated: they can be printed, edited and saved. When saved,
they remain in the isolation container and when reopened later, again are
quarantined in the sandbox. Outdated file types — which can be set by
administrators in the File Block feature within Word, Excel and PowerPoint — are
also shunted into Application Guard's virtual machine. Application Guard for
Office will be available to customers licensing Microsoft 365 E5 or Microsoft
365 E5 Security, and for now, only to those on either the Current Channel or
Monthly Enterprise Channel. (Those are the Microsoft 365 update channels that
deliver the most frequent refreshes.)
Digital nomads and "bleisure" define the new high-tech take on work trips
Many organizations have adopted remote work policies amid the COVID-19 pandemic. While some companies have brought telecommuters back to the traditional office, others have made long-term commitments to remote work. Ernest Lee, managing director of development and investments, Americas, with citizenM hotels, similarly alluded
to remote work-enabled "nomadic behavior" among professionals. The company
recently announced a global passport, a subscription service that allows remote workers with a penchant for frequent travel to stay in any of citizenM's 21 hotels around the globe. "We certainly think that this new sort
lifestyle will attract a certain type of person that wants to also blend in
their personal interests and passions [with] not having to be tied down so much
to a fixed location," Lee said. The company also offers a corporate subscription
providing organizations with access to hotel rooms and meeting room spaces at a
fixed price. Lee explained that this package is designed for remote teams who
are no longer sharing "the same co-located space." To enhance the traditional
business travel experience, hotels are incorporating a wide range of
technologies, in-app features, Internet of Things (IoT) capabilities, and
more.
'Clone Firm' Fraudsters Stealing Millions From UK Investors
A clone firm is a fake entity created by fraudsters that uses the name, address
and Firm Reference Number - a unique identifier assigned to every financial or
investment firm in the U.K. and issued by the Financial Conduct Authority - of a
legitimate organization, according to the alert. In some cases, the scammers
will clone or spoof the entire website of a legitimate firm. Once these fake and
spoofed websites are created, the fraudsters then send sales and marketing
materials to would-be investors that appear to originate from legitimate firms.
The scammers also advertise on social media, according to the alert. The
fraudsters use phishing emails and social engineering techniques to lure
victims, and their use of the legitimate sales materials gives the scheme a
sheen of authenticity. Once a connection is established, the fraudsters attempt
to get victims to send money to the cloned firm, the NCA notes. "Fraudsters use
literature and websites that mirror those of legitimate firms, as well as
encouraging investors to check the Firm Reference Number on the FCA Register to
sound as convincing as possible," says Mark Steward, executive director of
enforcement and market oversight for the Financial Conduct Authority.
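The practical upshot is that a matching Firm Reference Number is not, by itself, verification, because a clone quotes the real firm's FRN. A minimal sketch of the safer habit, comparing the contact details in a solicitation against those shown on the official register entry; the records below are hypothetical and no real register API is involved:

```python
# Hypothetical register entry, as a user would read it off the FCA Register.
register_entry = {"frn": "123456", "domain": "realfirm.co.uk",
                  "phone": "+44 20 7000 0000"}

# Details taken from the suspicious sales material.
solicitation = {"frn": "123456", "domain": "realfirm-invest.com",
                "phone": "+44 20 7999 9999"}

# The FRN matches -- exactly what a clone firm counts on.
assert solicitation["frn"] == register_entry["frn"]

# The contact details are what give the clone away.
for field in ("domain", "phone"):
    if solicitation[field] != register_entry[field]:
        print(f"Mismatch on {field}: "
              f"{solicitation[field]!r} vs register {register_entry[field]!r}")
```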
DDoS Attacks Reach Over 10 Million in 2020
Richard Hummel, threat intelligence lead at NETSCOUT, said, “It is no
coincidence that this milestone number of global attacks comes at a time when
businesses have relied so heavily on online services to survive. Threat actors
have focused their efforts on targeting crucial online platforms and services
such as healthcare, education, financial services and e-commerce that we all
rely on in our daily lives. As the COVID-19 pandemic continues to present
challenges to businesses and societies around the world, it is imperative that
defenders and security professionals remain vigilant to protect the critical
infrastructure that connects and enables the modern world.” DDoS attack count,
bandwidth, and throughput all saw significant increases since the start of the
global COVID-19 pandemic. For instance, attack frequency rose 20% year over
year, but that includes the pre-pandemic months of January, February, and most
of March. For the second half of 2020, which fell entirely within the pandemic,
attacks rose 22% year over year. As cybercriminals quickly exploited
pandemic-driven opportunities, we saw another kind of ‘new normal.’ Monthly DDoS
attacks regularly exceeded 800,000 starting in March, as the pandemic lockdown
took effect.
IoT at the edge: magic won’t happen automatically
Creating more value at the edge
Dheeraj Remella, Chief Product Officer at
VoltDB, notes the uncertainty around many edge and IoT business cases. He
argues, “Telcos spend a lot of time talking about moving up the value chain
beyond connectivity, and this is a great opportunity. Differentiation is based
on sets of complementary features, contributed by an ecosystem, that create
capabilities rather than individual features, which as stand-alones are not
compelling. The owner of the platform that delivers that joint capability holds
the keys to the digital kingdom.” As Remella points out, decisioning at
low-millisecond speed is one thing on a private network within an industrial
plant, but another ball game when the edge is hugely distributed, such as a wind
farm over hundreds or thousands of acres, or for smart agriculture or an
electricity grid. He says that often, to cut down processing times at the edge,
companies take what he calls a “hyper-contextualised” approach – automating
decisions based on data about a single entity or an isolated set of events. This
limits its usefulness, just making existing processes digital (digitising),
rather than using advances in technology to do things we’ve never been able to
do before (digitalising), which means doing things differently – changing
processes.
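A minimal sketch of the distinction Remella draws, with invented wind-farm readings and thresholds: a hyper-contextualised rule decides from one entity's data in isolation, while a digitalising approach correlates events across entities and can change the process itself:

```python
# Hypothetical wind-farm telemetry: per-turbine vibration readings.
readings = {"t1": 0.9, "t2": 0.85, "t3": 0.2, "t4": 0.88}
LIMIT = 0.8

def hyper_contextualised(turbine, value):
    """Decision from a single entity's data: digitises an existing rule."""
    return "shut_down" if value > LIMIT else "ok"

def fleet_aware(readings):
    """Decision from correlated events: if most turbines spike together,
    suspect a gust front and feather the whole farm, not one machine."""
    over = [t for t, v in readings.items() if v > LIMIT]
    if len(over) > len(readings) / 2:
        return "feather_entire_farm"   # a process change, not just automation
    return {t: hyper_contextualised(t, v) for t, v in readings.items()}

print(fleet_aware(readings))  # -> 'feather_entire_farm'
```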
Sorry, Data Lakes Are Not “Legacy”
From a technical perspective, compute and storage are intended to form a loosely coupled architecture. This benefits data warehouses, but the benefit is not exclusive to warehouses. Any modern data architecture, by design,
depends on a loosely coupled separation of compute and storage to deliver an
efficient, scalable, and flexible solution. The fact that data warehouse vendors
are introducing separate compute and storage is not innovation compared to data
lakes; it is achieving parity with data lakes. The evolution of separate compute
and storage in warehouses brings them in line with the architecture employed by
productive data lakes via on-demand SQL query services. In a post called "When to Adopt a Data Lake — and When Not to," a dig at data lakes was that they could not scale compute easily or on demand: "Some solutions architects have proposed data lakes to 'separate compute from storage' in a traditional data warehouse. But they're missing the point: You want the ability to scale compute easily and on-demand. A data lake isn't going to give you this; what you need is a data warehouse that can provision and suspend capacity whenever you need it."
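As one concrete example of that parity, an on-demand SQL query service over a data lake already separates compute from storage: the engine provisions compute per query while the data stays in object storage. A sketch using Amazon Athena over S3, with placeholder database, table, and bucket names:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Compute is provisioned per query; the data never leaves object storage.
response = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "my_lake_db"},                # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-bucket/results/"},
)
print("Query submitted:", response["QueryExecutionId"])
# No warehouse cluster to provision or suspend: storage (S3) and the query
# engine scale independently, which is the parity point made above.
```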
AI, machine learning effective in cyber defence, but can also present challenges
"Antivirus technology, for example, operates a strict ‘yes or no’ policy as to
whether a file is potentially malicious or not. It’s not subjective, through a
strict level of parameters, something is either considered a threat, or not." he
says. "The AI can quickly determine whether it’s going to crash the device, lock
the machine, take down the network and as such, it is either removed or allowed.
"It is important to note that VIPRE uses AI and ML as key components in their
email and endpoint security services for example as part of their email security
attachment sandboxing solution where an email attachment is opened and tested by
AI in an isolated environment away from a customer’s network," Paterson adds.
"So while AI might not be an ideal method for preventing accidental data leakage
through email, it does have an important part to play in specific areas such as
virus detection, sandboxing and threat analysis." Paterson says with so much
reliance on email within business practices, accidental data leakage is an
inevitable risk. "The implications of reputational impact, compliance breach and
associated financial damage can be devastating. A cyber-aware culture with
continuous training is essential, and so is the right technology," he says.
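The strict "yes or no" behaviour Paterson describes reduces to a hard decision over whatever the sandbox observes. A minimal sketch of such a binary verdict; the indicators and the rule are invented for illustration and are not VIPRE's actual logic:

```python
# Hypothetical indicators produced by detonating an attachment in a sandbox.
indicators = {
    "writes_to_system_dir": True,
    "disables_security_service": False,
    "spawns_network_flood": True,
}

# Strict parameters: any critical behaviour means the file is a threat.
CRITICAL = {"writes_to_system_dir", "disables_security_service",
            "spawns_network_flood"}

def verdict(indicators):
    """Either removed or allowed -- no subjective middle ground."""
    is_threat = any(indicators.get(name, False) for name in CRITICAL)
    return "remove" if is_threat else "allow"

print(verdict(indicators))  # -> 'remove'
```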
Does CI/CD impact telecom operations?
In the standard microservice code model that underpins cloud-native software,
every time a common code software component is improved, it will change all
network systems that use that standard code. This approach can bring
lightning-fast agility and innovation but leaves today's legacy biannual software test-and-validate processes entirely unfit for purpose. The telecom
CI/CD philosophy means that software is developed, delivered, tested, accepted,
and brought into operation incrementally at a far higher cadence than previously
in a traditional service provider environment. Further, it creates a significant
software development volume that needs validation on an increasingly dynamic
network. This approach implies that continuous software validation and
continuous testing must accompany continuous software delivery and deployment.
These requirements demand a new agile way of working between the network
operator, its software suppliers, and vendors. Essentially, the merging of Dev
and Ops as in the IT world is now a must for the telecom context where the 'Dev'
from vendors needs to seamlessly merge and receive feedback from the 'Ops' on
the operator side of the firewall. This evolution requires a transformation on
both the vendor side as well as the operator side.
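In code terms, that means every incremental delivery passes through a validation gate before deployment instead of waiting for a biannual test window. A minimal sketch, with hypothetical component names and checks:

```python
def validation_gate(component: str, version: str, checks: dict) -> str:
    """Continuous validation: every delivery must pass every check to deploy."""
    for name, check in checks.items():
        if not check():
            return f"{component} {version}: blocked by {name}"
    return f"{component} {version}: deployed"

# Hypothetical per-delivery checks an operator and its vendors might agree on.
checks = {
    "unit_tests": lambda: True,
    "network_regression": lambda: True,
    "canary_kpis": lambda: True,  # 'Ops' feedback flowing back to vendor 'Dev'
}

# Runs on every incremental delivery, not twice a year.
print(validation_gate("session-manager", "2.14.3", checks))
```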
Quote for the day:
"Entrepreneurship is the last refuge of
the trouble making individual." -- Natalie Barney