Encryption: Why security threats coast under the radar
This application of AI became a valuable source of IT expertise that multiplied
the staff bandwidth available to manage the solution and allowed for full,
detailed monitoring of the entire networked environment. With Flowmon ADS in place, the
institute has a comprehensive, yet noise-free overview of suspicious behaviours
in the partner networks, flawless detection capability, and a platform for the
validation of indicators of compromise. Flowmon’s solution works at scale too.
GÉANT – which is a pan-European data network for the research and education
community – is one of the world’s largest data networks, and transfers over
1,000 terabytes of data per day over the GÉANT IP backbone. For something of
that scale there is simply no way to manually monitor the entire network for
aberrant data. With two Flowmon collectors deployed in parallel for redundancy,
GÉANT had a pilot security solution capable of managing data flows at this scale
live in just a few hours. After a few months of further testing, integration and
algorithmic learning, the solution was ready to protect GÉANT’s entire network
from encrypted data threats.
In The Digital Skills Pipeline, A Shift Away From Traditional Hiring Modes
“As digital transformation accelerates and we experience generational shifts,
professionals will increasingly desire better work-life balance and freedom from
legacy in-office models,” says Saum Mathur, chief product, technology and AI
officer with Paro. “Consultancies and others that are reliant on legacy models
are struggling to adapt to this new reality, and marketplaces are only
furthering these models’ disruption. Three to five years ago, the gig economy
pioneers offered customers finite, task-based services that didn’t require
extensive experience and enabled flexible scheduling. With continued shifts in
the technical and cultural landscape, the gig economy has been extended into
professional services, which is powered by highly experienced subject matter
experts of all levels.” Corporate culture needs to be receptive to the changes
wrought by digital transformation. Forty-one percent of executives in the
Alliantgroup survey have encountered employee resistance, while 32% say they
have had “the wrong team or department overseeing initiatives.”
Remote-working jobs: Disaster looms as managers refuse to listen
The Future Forum Pulse survey echoed a sentiment that has been voiced repeatedly
over the past 18 or so months: employees have embraced remote working, and see
it as a pillar of their future working preferences. Yet executives are more
likely than lower-level workers to be in favour of a working week based heavily
around an office. Of those surveyed, 44% of executives said they wanted to work
from the office every day, compared to just 17% of employees. Three-quarters
(75%) of executives said they wanted to work from the office 3-5 days a week,
versus 34% of employees. This disconnect between employer and employee
preferences risks being entrenched into new workplace policies, researchers
found. Two-thirds (66%) of executives reported they were designing post-pandemic
workforce plans with little to no direct input from employees – and yet 94% said
they were "moderately confident" that the policies they had created matched
employee expectations. What's more, more than half (56%) of executives reported
they had finalized their plans on how employees can work in the future.
Will the cloud eat your AI?
"CSPs' cloud and digital services have given them access to the enormous amounts
of data required to effectively train AI models," the authors concluded. Such
economies of scale have been an asset to the cloud providers for years. Years
ago, RedMonk analyst Stephen O'Grady highlighted the "relentless economies of
scale" that the cloud providers brought to hardware–they could simply build more
cheaply than any enterprise could hope to replicate in their own data centers.
Now the CSPs enjoy a similar advantage with data. But it's not merely a matter
of raw data. The CSPs also have more experience using that data on a large
scale. The CSPs have products (e.g., Amazon Alexa to assist with natural
language processing, or Google Search to help with recommendation systems). Lots
of data feeding ever-smarter applications feeding more data into the
applications... it's a self-reinforcing cycle. Oh, and that hardware mentioned
earlier? The CSPs also have more experience tuning hardware to process machine
learning workloads at scale.
Operationalizing machine learning in processes
Operationalizing ML is data-centric—the main challenge isn’t identifying a
sequence of steps to automate but finding quality data that the underlying
algorithms can analyze and learn from. This can often be a question of data
management and quality—for example, when companies have multiple legacy systems
and data are not rigorously cleaned and maintained across the organization.
However, even if a company has high-quality data, it may not be able to use the
data to train the ML model, particularly during the early stages of model
design. Typically, deployments span three distinct, and sequential,
environments: the developer environment, where systems are built and can be
easily modified; a test environment (also known as user-acceptance testing, or
UAT), where users can test system functionalities but the system can’t be
modified; and, finally, the production environment, where the system is live and
available at scale to end users.
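As a rough sketch of that dev, UAT, production split, the snippet below models the three environments as plain Python configuration; the field names, replica counts, and promote() helper are illustrative assumptions, not part of any specific MLOps tooling.

```python
# A minimal sketch of the three-environment split described above. All names
# and fields here are illustrative assumptions, not a prescribed setup.
from dataclasses import dataclass


@dataclass(frozen=True)
class EnvironmentConfig:
    name: str            # "dev", "uat", or "prod"
    code_mutable: bool   # can the system be modified in place?
    user_facing: bool    # is it exposed to (test or end) users?
    replica_count: int   # rough proxy for "available at scale"


ENVIRONMENTS = [
    EnvironmentConfig(name="dev",  code_mutable=True,  user_facing=False, replica_count=1),
    EnvironmentConfig(name="uat",  code_mutable=False, user_facing=True,  replica_count=2),
    EnvironmentConfig(name="prod", code_mutable=False, user_facing=True,  replica_count=10),
]


def promote(current: str) -> str:
    """Return the next environment in the dev -> uat -> prod sequence."""
    order = [e.name for e in ENVIRONMENTS]
    idx = order.index(current)
    if idx == len(order) - 1:
        raise ValueError("prod is the final environment")
    return order[idx + 1]


if __name__ == "__main__":
    print(promote("dev"))   # uat
    print(promote("uat"))   # prod
```

The key point the configuration captures is the one made above: only the developer environment is mutable, while UAT and production lock the system down and differ mainly in who uses them and at what scale.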
MLOps essentials: four pillars for Machine Learning Operations on AWS
Managing code in Machine Learning applications is a complex matter. Let’s see why!
Collaboration on model experiments among data scientists is not as easy as
sharing traditional code files: Jupyter Notebooks bundle code, execution output,
and metadata in a single JSON document, which makes the git chores needed to
keep code synchronized between users harder and leads to frequent merge
conflicts. Developers must also work across several sub-projects: ETL jobs,
model logic, training and validation, inference logic, and
Infrastructure-as-Code templates. All of these separate projects must be
centrally managed and adequately versioned! For modern software applications,
there are many well-established version control practices such as conventional
commits, feature branching, squash and rebase, and continuous integration.
These techniques, however, are not always applicable to Jupyter Notebooks
since, as stated before, they are not simple text files. Data scientists need to try many
combinations of datasets, features, modeling techniques, algorithms, and
parameter configurations to find the solution which best extracts business
value.
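To make that search concrete, here is a minimal sketch of enumerating model and parameter combinations and recording a score for each, assuming scikit-learn is available; the dataset, candidate models, and parameter grids are placeholders rather than a recommended setup, and in a real MLOps pipeline each record would go to an experiment tracker rather than a plain list.

```python
# A minimal sketch of enumerating experiment combinations, assuming scikit-learn.
# Models, grids, and the synthetic dataset are illustrative placeholders.
from itertools import product

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a business dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "random_forest": (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200]}),
}

results = []
for name, (model, grid) in candidates.items():
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        model.set_params(**params)
        score = cross_val_score(model, X, y, cv=3).mean()
        # In practice this record would be logged to an experiment tracker.
        results.append({"model": name, **params, "cv_score": round(score, 4)})

for row in sorted(results, key=lambda r: r["cv_score"], reverse=True):
    print(row)
```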
Why Unsupervised Machine Learning is the Future of Cybersecurity
There are two types of Unsupervised Learning models: discriminative models and
generative models. A discriminative model can only tell you that, given an
input X, the consequence is Y, whereas a generative model can tell you the
joint probability of seeing X and Y at the same time. So the difference is as
follows: the discriminative model assigns labels to inputs and has no
predictive capability; if you give it an X it has never seen before, it cannot
tell what the Y will be because it simply hasn’t learned that. With a
generative model, once you set it up and establish the baseline, you can give
it any input and ask it for an answer. Thus, it has predictive ability: for
example, it can generate a possible network behavior that has never been seen
before. So let’s say a person sends a 30-megabyte file at noon; what is the
probability that they would do that? If you asked a discriminative model
whether this is normal, it would check to see if the person had ever sent such
a file at noon before… but only specifically at noon.
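As a rough illustration of that generative-baseline idea (not any particular product's implementation), the sketch below fits a density model over (hour of day, file size) pairs and scores an unseen "30 MB at noon" event; it assumes NumPy and scikit-learn, and the baseline traffic is synthetic, invented purely for illustration.

```python
# A minimal sketch of scoring an unseen event against a generative baseline.
# Assumes scikit-learn; the synthetic "normal" traffic is an illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Baseline behaviour for one user: (hour of day, file size in MB) pairs.
hours = rng.normal(loc=10, scale=2, size=1000)            # mostly morning activity
sizes_mb = rng.lognormal(mean=0.5, sigma=0.8, size=1000)  # mostly small files
baseline = np.column_stack([hours, sizes_mb])

# Fit a generative density model over the joint (hour, size) distribution.
model = GaussianMixture(n_components=3, random_state=0).fit(baseline)

# Score a previously unseen event: a 30 MB file sent at noon.
event = np.array([[12.0, 30.0]])
log_density = model.score_samples(event)[0]
print(f"log-density of 30 MB at noon: {log_density:.2f}")

# Low density relative to the baseline suggests an anomalous event, even though
# this exact (hour, size) combination was never observed during training.
threshold = np.quantile(model.score_samples(baseline), 0.01)
print("anomalous" if log_density < threshold else "within baseline")
```

Because the model estimates the joint density rather than memorising specific (input, label) pairs, it can assign a meaningful probability to a combination it has never seen, which is exactly the gap the discriminative approach leaves open.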
Sorry, Blockchains Aren’t Going to Fix the Internet’s Privacy Problem
Recently, a number of blockchain-based companies have sprung up with the vision
of helping people take control of their data. They get an enthusiastic reception
at conferences and from venture capitalists. As someone who cares deeply about
my privacy, I wish I thought they stood a better chance of success, but they
face many obstacles on the road ahead. Perhaps the biggest obstacle I see for
personal-data monetization businesses is that your personal information just
isn’t worth that much on its own. Data aggregation businesses run on a principle
that’s sometimes referred to as the “river of pennies.” Each individual user or
asset has nearly zero value, but multiply the number of users by millions and
suddenly you have something that looks valuable. That doesn’t work in reverse,
however: companies are far more focused and disciplined in the pursuit
of millions of dollars in ad or data revenue than one consumer trying to make
$25 a year. But why isn’t your data worth that much? Very simply, the world is
awash in your information, and you’re not the only source of that information.
The truth is that you leak information constantly in a digital ecosystem.
Iranian APT targets aerospace and telecom firms with stealthy ShellClient Trojan
The Trojan is created with an open-source tool called Costura that enables the
creation of self-contained compressed executables with no external dependencies.
This might also contribute to the program's stealthiness and explain why it
wasn't discovered and documented until now, after three years of operation.
Another possible reason is that the group only used it against a small and
carefully selected pool of targets, albeit spread across geographies. ShellClient has three
deployment modes controlled by execution arguments. One installs it as a system
service called nhdService (Network Hosts Detection Service) using the
InstallUtil.exe Windows tool. Another execution argument uses the Service
Control Manager (SCM) to create a reverse shell that communicates with a
configured Dropbox account. A third execution argument only executes the malware
as a regular process. This seems to be reserved for cases where attackers only
want to gather information about the system first, including which antivirus
programs are installed, and establish if it's worth deploying the malware in
persistence mode.
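For defenders, a minimal triage sketch along these lines could check a Windows host for a service matching the reported nhdService name; it assumes the psutil package is installed and is purely an illustrative indicator-of-compromise check, not the researchers' detection logic or a complete hunting procedure.

```python
# A minimal, illustrative check for a service matching the "nhdService" name
# reported above. Assumes psutil; Windows-only (win_service_iter is Windows API).
import sys

import psutil

SUSPECT_SERVICE_NAMES = {"nhdservice"}
SUSPECT_DISPLAY_NAMES = {"network hosts detection service"}


def find_suspect_services():
    """Return (name, display_name, status) tuples for matching services."""
    hits = []
    for svc in psutil.win_service_iter():
        name = svc.name().lower()
        display = svc.display_name().lower()
        if name in SUSPECT_SERVICE_NAMES or display in SUSPECT_DISPLAY_NAMES:
            hits.append((svc.name(), svc.display_name(), svc.status()))
    return hits


if __name__ == "__main__":
    if not sys.platform.startswith("win"):
        sys.exit("win_service_iter() is only available on Windows")
    for name, display, status in find_suspect_services():
        print(f"possible ShellClient indicator: {name} ({display}) status={status}")
```

A service name alone is a weak signal, of course; in practice it would be one indicator among several (execution arguments, Dropbox traffic, use of InstallUtil.exe) rather than a verdict on its own.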
How financial services can invest in the future with predictive analytics
Predictive analytics empowers users to make better decisions that consider what
has happened and what is likely to happen based on the available data. And those
decisions can only be made if employees understand what they’re working with.
They need good data literacy competencies to understand, challenge, and take
action based on the insights, along with the ability to recognise the
limitations of predictive analytics and question its output. After all, a forecast’s
accuracy depends on the data fuelling it, so its performance could be impacted
during an abnormal event or by intrinsic bias in the dataset. Employees must
have confidence in their understanding of the data to question its output. This
is particularly true when decisions could directly impact customers’ lives, as
those made in the financial sector so often do – from agreeing an overdraft
that helps a customer make it to payday, to approving a mortgage application in
time.
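As a toy illustration of how an abnormal event can degrade a forecast, the sketch below compares a simple moving-average forecast's error on a stable spending series against a period containing a sudden shock; the synthetic numbers and the moving-average model are assumptions made purely for illustration, not a real financial model.

```python
# A toy illustration: a forecast calibrated on "normal" history degrades when
# an abnormal event shifts behaviour. The synthetic series is an assumption.
import numpy as np

rng = np.random.default_rng(1)

# 24 months of stable monthly spending, then a 12-month abnormal period
# (e.g., a sudden change in circumstances) at roughly double the level.
normal = rng.normal(loc=1000, scale=50, size=24)
abnormal = rng.normal(loc=2000, scale=100, size=12)
series = np.concatenate([normal, abnormal])


def moving_average_forecast(history: np.ndarray, window: int = 6) -> float:
    """Forecast the next value as the mean of the last `window` observations."""
    return float(history[-window:].mean())


errors_normal, errors_abnormal = [], []
for t in range(12, len(series)):
    forecast = moving_average_forecast(series[:t])
    error = abs(series[t] - forecast)
    (errors_abnormal if t >= 24 else errors_normal).append(error)

print(f"mean abs error, normal period:   {np.mean(errors_normal):8.1f}")
print(f"mean abs error, abnormal period: {np.mean(errors_abnormal):8.1f}")
```

The error jumps during the abnormal period because the model only knows what the historical data told it, which is exactly why employees need the literacy and confidence to question a forecast's output rather than accept it at face value.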
Quote for the day:
"All leadership takes place through the
communication of ideas to the minds of others." -- Charles Cooley