Edge computing’s epic turf war
Edge computing and the IoT — not to mention COVID-19-related production and
supply chain disruptions — are already blurring the lines separating the two
cultures. IoT devices bring a new level of monitoring and in some cases
control over OT systems. Plus, the edge deployments to which those IoT devices
connect promise a whole new arsenal of analytics capabilities to crunch the
massive amounts of data being produced by OT equipment. Yet many OT
organizations see edge computing as duplicative and even potentially harmful.
“Selling that value proposition [to OT] is very challenging,” says Jonathan
Lang, IDC research manager for its Worldwide IT/OT Convergence Strategies
research practice. “They already have legacy wired connections and industrial
networking capabilities and SCADA [supervisory control and data acquisition]
systems and control systems that serve their needs just fine.” OT leaders also
believe that integrating new systems could threaten throughput and
reliability, he says. Production requirements change rapidly, and equipment may need to be reconfigured on short notice. “When IT starts meddling with
their equipment, that translates into a loss in productivity,” Lang says.
Use cases for AI while remote working
“Working from home presents a host of challenges: lower-quality equipment, shared working spaces and limited available bandwidth, to name a few. Many of
these challenges mean that the quality of our audio and video communication is
less than ideal. AI is regularly applied to improve the quality of our video
in real time and automatically filter out distracting background noises.
“Remote working can increase the feeling of being isolated or disconnected
from workmates, lead to disengagement and even stress and ill health. Through
sentiment and interaction analysis of everyday tools like email, messenger and
other collaboration applications, AI systems can provide effective means of
measuring employee wellbeing and engagement, highlighting where employees need
help. Once problems are identified, AI systems can suggest support resources
and activities that can help to make things better.” Lastly, an emerging
AI-related method of maintaining operations during remote working has been the
use of digital workers. “Digital workers are becoming essential for
businesses, optimising something we’ve never been able to before: the
bandwidth of employees,” said Ivan Yamshchikov, AI evangelist at ABBYY.
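Neither excerpt names a concrete tool, but the sentiment analysis described above can be illustrated with a minimal sketch using NLTK's open-source VADER analyzer; the choice of library and the flag threshold are assumptions, and real wellbeing systems combine far more signals (and raise privacy questions this toy ignores):

```python
# A toy illustration of lexicon-based sentiment scoring over workplace
# messages, using NLTK's open-source VADER analyzer. Library choice and
# threshold are assumptions, not anything the excerpt prescribes.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

messages = [
    "Happy to help, ping me any time!",
    "I'm drowning in tickets and nobody answers my emails.",
]

analyzer = SentimentIntensityAnalyzer()
for msg in messages:
    # 'compound' ranges from -1 (very negative) to +1 (very positive)
    score = analyzer.polarity_scores(msg)["compound"]
    if score < -0.4:  # flag threshold is an arbitrary assumption
        print(f"possible disengagement signal ({score:+.2f}): {msg}")
```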
Eliminating Unconscious Bias in Tech Recruitment
First, it is important to realize that where you hire from is as important
as how you hire. If you only ever recruit from the same schools or through the same websites, then you are leaving yourself at the mercy of the diversity of those institutions. If you only ever offer one route to apply, whether that's relying solely on recruitment consultants or using a website that is only really accessible through a computer, then again you are limiting yourself
to candidates with access to those pathways. Outside of reconsidering talent
pools, one way to reduce unconscious bias is through the removal of
identifying characteristics: no photos, no names, no dates of birth, no school or university grades. This can help to an extent, in that decision-makers
have to go off the candidates’ experience and how they’ve presented their
previous roles, but it does have its restrictions—with entry-level
positions, for example, where relevant experience might be limited. That’s a
quick-fix solution. For a more sustainable, thorough and long-term approach,
recruiters—particularly those hiring for technology positions—need to look
at how they can accurately judge skill sets. This can be challenging in
itself—in larger organizations, those involved in early stage selection may
not have the requisite background understanding to make the right choices.
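The blind-screening step described above is simple to prototype; a minimal sketch, with hypothetical field names (no specific applicant-tracking system is implied):

```python
# A minimal sketch of blind screening: strip identity-revealing fields
# before reviewers see a candidate record. Field names are hypothetical
# assumptions, not from any particular ATS.
IDENTIFYING_FIELDS = {"name", "photo_url", "date_of_birth",
                      "school", "university_grades"}

def anonymize(candidate: dict) -> dict:
    """Return a copy of the record without identifying fields."""
    return {k: v for k, v in candidate.items()
            if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "Jane Doe",
    "date_of_birth": "1994-03-02",
    "school": "Example High",
    "experience": "5 years backend development, led a team of 3",
}
print(anonymize(candidate))  # reviewers see experience fields only
```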
Ad Fraud: The Multibillion-Dollar Cybercrime CISOs Might Overlook
The practice became significantly more widespread when scammers began leveraging networked bots to create fake clicks on sites they own or ads they've paid for. It now also encompasses hidden ads, which target ad networks that measure views rather than clicks; click hijacking, in which the fraudster redirects a click from one ad to another; and fake apps, which look like and are labeled as legitimate apps. These techniques are often used simultaneously to victimize companies, making the fight against ad fraud even more complex, says Luke Taylor, chief operating officer of adtech security company TrafficGuard, which coauthored the report with Juniper. Taylor believes that, at the very least, CISOs should use lessons from the
cybersecurity world to encourage their employers to become more engaged with
the ad fraud challenge. A lot of ad fraud is based on making fake traffic
look real, and the way that fraudsters do that is by stealing traffic logs
to mimic them and create authentic-looking but fake traffic. CISOs, Taylor
says, should be protecting their logs from cybercriminals the way they
protect financial data.
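No single check catches this, but the flavor of bot detection is easy to sketch; a toy heuristic over click logs, where the log fields and thresholds are assumptions rather than anything TrafficGuard describes:

```python
# A toy bot-click heuristic: flag client IPs whose click timing looks
# machine-like. Real ad-fraud detection layers many such signals;
# field names and thresholds here are illustrative assumptions.
from collections import defaultdict
from statistics import pstdev

clicks = [  # (client_ip, epoch_seconds) from a hypothetical ad-server log
    ("10.0.0.5", 100.0), ("10.0.0.5", 101.0), ("10.0.0.5", 102.0),
    ("10.0.0.5", 103.0), ("203.0.113.9", 100.0), ("203.0.113.9", 187.5),
]

by_ip = defaultdict(list)
for ip, ts in clicks:
    by_ip[ip].append(ts)

for ip, times in by_ip.items():
    gaps = [b - a for a, b in zip(times, times[1:])]
    # Many clicks at near-perfectly regular intervals suggest automation.
    if len(gaps) >= 3 and pstdev(gaps) < 0.1:
        print(f"{ip}: {len(times)} clicks at uniform intervals, possible bot")
```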
Tech Conference Diversity: Time to Get Real
Some tech conferences provide non-traditional amenities such as sessions
geared towards women (72%), a mothers' room (56%), a conference hosted
meetup (28%), on-site daycare (17%) or a childcare stipend (11%). According
to Classon, childcare stipends tend to be offered to people to encourage
attendance, although they should also be offered to speakers who have not
been offered a speaker stipend. "Part of the industry's problem is that
organizers look at providing amenities, like a designated mothers' room or
childcare stipend, as an extra bonus of their event when these should have
been looked at as table stakes to level the playing field for more women,"
said Classon. "The same argument can be made for religious observances and
the need for designated spaces for worship at weeklong or multi-day
conferences." Tech conferences also tend to suffer from design bias as
evidenced by the use of stools and chairs on stage that can make wearing a
dress or skirt uncomfortable for the speaker and the audience. "Replacing
bar stools with chairs that are lower to the ground makes it more
comfortable for everybody, frankly," said Classon. "Organizers should also
consider swapping out the common clip-on microphone that is difficult to
attach to women's clothing for a headset that can rest behind the speaker's
ear."
Portland becomes first city to ban companies from using facial recognition software in public
"This is the first of its kind legislation in the nation, and I believe in
the world. This is truly a historic day for the city of
Portland." Debate over facial recognition software continues to rage
after a summer of high-profile news related to the technology. Amazon, IBM,
and other major tech companies decided in June to put a moratorium on all
police department use of facial recognition software after years of studies
showing that almost all of the available tools have high error rates and, in particular, misidentify the faces of people with darker skin far more often. Later that same month, the ACLU revealed that it was representing
a man from Detroit who was arrested in front of his wife and kids based on a
mistake by the Detroit Police Department's facial recognition software. Just
a day later, US Senators Ed Markey and Jeff Merkley announced the Facial
Recognition and Biometric Technology Moratorium Act, which they said
resulted from a growing body of research that "points to systematic
inaccuracy and bias issues in biometric technologies which pose
disproportionate risks to non-white individuals." As with the other recent
developments related to facial recognition, experts both praised and
criticized the Portland ban.
Wi-Fi 6 explained: Speed, range, latency, frequency and security
Wi-Fi 6 technology enables the fastest wireless networks to date, with
theoretical maximum speeds of around 10 Gbps versus Wi-Fi 5 Wave 2's peak
data rates of around 7 Gbps. Experts caution, however, that max wireless
speeds are typically unattainable except in perfect, laboratory-like
conditions. That's why Wi-Fi 5 -- advertised as "gigabit wireless" -- failed
to smash the gigabit barrier in actual practice, according to Zeus
Kerravala, founder and principal analyst of ZK Research. "If you were
sitting at your desk by yourself and no one else was attached to the
network, you might have gotten a gigabit of connectivity from Wave 2,"
Kerravala said. "But I haven't talked to any company that did." He expects
Wi-Fi 6 will be the first standard to consistently exceed 1 Gbps in
real-world implementations. But more important than its raw increase in
throughput, experts agreed, is Wi-Fi 6 technology's efficiency gains, which
result in higher capacity and lower latency overall. "Wi-Fi 6 is not as much
about getting better device performance," independent analyst John Fruehe
said. "It really does a better job of managing larger numbers of clients at
the router level."
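For readers checking the headline numbers, the peaks follow directly from per-stream rates and stream counts; a quick back-of-the-envelope sketch, using the published 802.11ax and 802.11ac per-stream maxima at a 160 MHz channel width:

```python
# Back-of-the-envelope check of the headline figures above. Peak
# per-stream PHY rates at 160 MHz: 802.11ax (Wi-Fi 6, 1024-QAM) is
# 1201 Mbps; 802.11ac Wave 2 (Wi-Fi 5, 256-QAM) is about 867 Mbps.
# Both standards allow up to 8 spatial streams.
WIFI6_PER_STREAM_MBPS = 1201
WIFI5_PER_STREAM_MBPS = 867
MAX_STREAMS = 8

print(f"Wi-Fi 6 peak: {WIFI6_PER_STREAM_MBPS * MAX_STREAMS / 1000:.1f} Gbps")  # ~9.6
print(f"Wi-Fi 5 peak: {WIFI5_PER_STREAM_MBPS * MAX_STREAMS / 1000:.1f} Gbps")  # ~6.9
# Real deployments see far less: contention, range, and typical 2x2
# client radios leave per-device throughput a fraction of these peaks.
```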
Open Source Security's Top Threat and What To Do About It
It was easier for organizations to understand and control their use of open
source software 10 or 20 years ago, when a smaller pool of commercial open
source vendors licensed their software to customers, understood everything
about the code, and handled security patching. Now, however, developers draw
from a massive array of smaller projects they find on GitHub or share with
each other. That, after all, is the beautiful thing about open source —
developers no longer have to struggle with bad tools or reinvent software
wheels when they can easily benefit from the community's freely available
contributions to tackle just about any development need. When they do so,
however, they seldom examine what's under the hood — the source code and its
dependencies. Can they really trust the code? Does the party who created
it stand ready to pinpoint and disclose any security flaws? Is there even
someone to contact? A single application can have 10 runtimes and 100 other
packages. How can you be confident all are up to date from a security
perspective? This fragmentation is the No. 1 open source security threat for enterprises, and it may help explain why Common Vulnerabilities and Exposures (CVE) counts keep rising.
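Answering the "are all 100 packages up to date?" question can at least be automated; a minimal sketch that checks pinned Python dependencies against the public OSV.dev vulnerability API (the requirements.txt layout is an assumption, and dedicated tools such as pip-audit do this far more thoroughly):

```python
# A minimal sketch: check pinned Python dependencies against the public
# OSV.dev vulnerability database. Assumes a requirements.txt of exact
# "name==version" pins; a real audit would handle ranges, hashes, and
# transitive dependencies too.
import json
import urllib.request

def known_vulns(name: str, version: str) -> list:
    query = {"package": {"name": name, "ecosystem": "PyPI"},
             "version": version}
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

with open("requirements.txt") as f:
    for line in f:
        if "==" in line:
            name, version = line.strip().split("==")
            for vuln in known_vulns(name, version):
                print(f"{name}=={version}: {vuln['id']}")
```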
Why problem solving using analytics needs new thinking
There are parallels for this evolution. There was a time when building a
website meant learning to write extensive lines of code. This eventually
evolved to a partial self-service model via open-source software, and now
the prevalence of simple drag-and-drop features allows anyone with an idea to create a personalised website. As with the development of web design, analytic process automation (APA) platforms now allow users to get to the creative stage – or the ‘thinking stage’ – sooner, leapfrogging the mundane tasks of sourcing, cleaning and
organising data. The equivalent of web design’s user-friendly drag-and-drop
features are the hundreds of building blocks that jump-start the process of
creating useful analytic models. Through a unified method of managing data
analytics, automating business processes and elevating employees to spend
their time on more strategic solving, APA reshapes the way companies
generate data-driven insights and act on them. This enables upskilled
employees in all parts of the business to ask hard questions and obtain
swift answers without always relying upon the advanced skills of data
experts. By replacing a range of cumbersome point solutions with one
platform that sits across the entire analytic journey, APA also enables
anyone in any organisation to build predictive models and use predictive
data analytics to drive quick wins.
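The building blocks in commercial APA products are proprietary, but the compose-rather-than-code idea is familiar from open-source tooling; a rough scikit-learn analogy, where the dataset and steps are illustrative only:

```python
# A rough open-source analogy for APA-style building blocks: each
# pipeline step is a reusable component composed without bespoke
# data-wrangling code. Dataset and steps are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
model = Pipeline([
    ("clean", SimpleImputer(strategy="mean")),      # "cleaning" block
    ("scale", StandardScaler()),                    # "organising" block
    ("predict", LogisticRegression(max_iter=200)),  # predictive block
])
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```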
From Monolith to Event-Driven: Finding Seams in Your Future Architecture
The CQRS pattern strongly suggests that it is about the segregation of
commands and queries, but realizing that there is a difference between 1)
asking for the state of a system and 2) asking the system to change its state
is more fundamental than the separation itself. In fact, you’ll find many
variants of CQRS implementations ranging from logical to
physical. Combining EDA with the CQRS pattern is a natural increment of
the system’s design because commands are the generators of events. With CQRS
and commands, the migration of data during the transitional state of an
architecture provides a seam which, once the migration is over, can be removed. This seam will be covered in more detail in the Data Migration Seams section.
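Nothing in the source prescribes a concrete implementation, but the command/query split, with commands as the generators of events, can be sketched minimally; every name below is an illustrative assumption:

```python
# A minimal sketch of CQRS with commands emitting events. All names are
# illustrative assumptions; real systems typically separate the two
# sides across services and data stores.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountOpened:  # an event: a fact about a state change
    account_id: str
    owner: str

class CommandHandler:
    """Write side: asks the system to change state; emits events."""
    def __init__(self, event_log: list):
        self.event_log = event_log

    def open_account(self, account_id: str, owner: str) -> None:
        # A command produces an event rather than answering a question.
        self.event_log.append(AccountOpened(account_id, owner))

class AccountQueries:
    """Read side: asks for state via a projection of the events."""
    def __init__(self, event_log: list):
        self.event_log = event_log

    def owners(self) -> dict:
        return {e.account_id: e.owner for e in self.event_log
                if isinstance(e, AccountOpened)}

log: list = []
CommandHandler(log).open_account("a-1", "Ada")
print(AccountQueries(log).owners())  # {'a-1': 'Ada'}
```

Event sourcing a system means the treatment of events as the source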
of truth. In principle, until an event is made durable within the system, it cannot be processed any further. Just as an author's story is not a story at all until it is written, an event should not be projected, replayed, published or otherwise processed until it is durable, for example persisted to a data store. Designs that treat the event as secondary cannot rightfully claim to be event sourced; they are merely event-logging systems.
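That durability rule is mechanical enough to sketch; a minimal illustration in which a JSON-lines file stands in for a real event store (an assumption, not the author's design):

```python
# A minimal sketch of the 'durable before processed' rule of event
# sourcing: an event is appended to a persistent log before any
# projection or publisher sees it. The JSON-lines file is a stand-in
# assumption for a real event store.
import json

class EventStore:
    def __init__(self, path: str):
        self.path = path
        self.subscribers = []

    def append(self, event: dict) -> None:
        # 1. Make the event durable first: it is the source of truth.
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")
            f.flush()
        # 2. Only then let projections and publishers process it.
        for handle in self.subscribers:
            handle(event)

store = EventStore("events.jsonl")
store.subscribers.append(lambda e: print("projected:", e))
store.append({"type": "AccountOpened", "account_id": "a-1"})
```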
Quote for the day:
"Leadership does not always wear the harness of compromise." -- Woodrow Wilson