The work-from-home employee’s bill of rights
Keeping business and personal data separate is straightforward for most cloud
services, so legitimate security concerns can be addressed in such hybrid
environments. Only in areas where IT cannot reasonably ensure security may
businesses disallow specific optional technologies or hybrid usage. (The
employee should be made aware that in such mixed-usage cases, should
there ever be a legal proceeding, their personal devices used for work could
be subject to discovery and thus be taken during the course of an
investigation.) IT also must allow the use of personal services in such
mixed-usage environments, such as allowing users to use personal Slack, Zoom,
or Skype accounts for personal communications rather than blocking such
software to force the use of a corporate standard. Instead, managers would
enforce the use of corporate-standard technology for business purposes, not IT
through technology barriers. The basic principle should be that employees can
bring their own technology into the mix unless it creates a clear security
issue — and not a theoretical one, since IT too often cites security as an
easy reason to say no to employee requests without any real evidence of a
risk.
Artificial Intelligence Collaboration in Asia’s Security Landscape
Though the field of AI – a catchall term for a set of technologies that enable
machines to perform tasks that require human-like capabilities – has been
around for decades, interest in it has surged over the past few years,
including across the Asia-Pacific, with individual countries beginning to
develop their own national approaches and multilateral groupings such as the
OECD formulating guidance, including principles on AI. In the security realm more
specifically, AI is emerging as a key topic for defense policymakers and
communities alike in a range of areas, from assessments of its impact on
geopolitical competition to potential collaboration between some
Indo-Pacific partners and their expert communities. It has also been a topic
of discussion among scholars and policymakers in annual Asian security fora
such as the Shangri-La Dialogue and the Xiangshan Forum. Seen from this
perspective, Mohamad’s highlighting of AI as an area of focus for Asian
defense establishments was very much in keeping with these trends. As he noted
in his keynote address, AI represents an emerging domain where armed forces
and defense establishments can play a key role in efforts to “strengthen the
international order and enhance practical cooperation” by promoting
responsible state behavior, building confidence, and fostering international
stability.
Is neuroscience the key to protecting AI from adversarial attacks?
For the new research, Cox and DiCarlo joined Joel Dapello and Tiago Marques,
the lead authors of the paper, to see if neural networks became more robust to
adversarial attacks when their activations were similar to brain activity. The
AI researchers tested several popular CNN architectures trained on the
ImageNet dataset, including AlexNet, VGG, and different variations of ResNet.
They also included some deep learning models that had undergone “adversarial
training,” a process in which a neural network is trained on adversarial
examples to avoid misclassifying them. The scientists evaluated the AI models
using the BrainScore metric, which compares activations in deep neural
networks and neural responses in the brain. They then measured the robustness
of each model by testing it against white-box adversarial attacks, where an
attacker has full knowledge of the structure and parameters of the target
neural networks. “To our surprise, the more brainlike a model was, the more
robust the system was against adversarial attacks,” Cox says. “Inspired by
this, we asked if it was possible to improve robustness (including adversarial
robustness) by adding a more faithful simulation of the early visual cortex —
based on neuroscience experiments — to the input stage of the network.”
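The paper's own evaluation pipeline is not reproduced here, but as a rough illustration of what a white-box attack involves, the sketch below shows a minimal fast gradient sign method (FGSM) attack in PyTorch. The model choice, epsilon value, and robustness check are assumptions for illustration, not the study's actual setup.

```python
# Minimal white-box FGSM sketch (illustrative assumptions: ResNet-18,
# epsilon value, and pixel values in [0, 1]; not the paper's actual setup).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()

def fgsm_attack(model, x, y, epsilon=0.01):
    """Perturb each pixel by epsilon in the direction that increases the loss,
    using full (white-box) access to the model's gradients."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Robustness would then be the fraction of adversarial images still
# classified correctly, e.g.:
# x_adv = fgsm_attack(model, images, labels)
# accuracy = (model(x_adv).argmax(dim=1) == labels).float().mean()
```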
Speed Limits in Software Development
For software development there aren’t road signs telling us a safe speed to
deploy at, but perhaps we can extend the driving metaphor a bit more to help us
think this through. One thing that relates to safe speed is responsiveness. A
slick road makes it harder for your car to respond to changes in direction, and
slow deployment makes it hard to respond to problems with your application. How
easy is it to respond to issues in your application? Don’t forget that an F1 race
car with new tires and perfect tuning can respond a lot better than the little
commuter car you might have. We can tune our code and deployments and get better
at responsiveness over time. If the road is foggy and you can’t see where you
are going when you drive, I hope you slow down. If you can’t see what is going on
in your application and understand how it is being used, I hope you slow down.
... So how fast can we go in software development? Well, in the ideal case
if we know everything and have a smooth path ahead of us, pretty fast. I don’t
think we can get to a land speed record since software development doesn’t often
involve going in a straight line, but with a bit of work on the code and
deployment process and with investment in observability and operations, I think
we can go pretty fast, pretty safely. Just be careful.
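To make the "can you see where you are going" point concrete, the investment in observability the author calls for can start as small as timing and logging key operations so problems surface before you speed up deployments. The sketch below is a minimal, generic example; the operation names are illustrative, not from the article.

```python
# Minimal observability sketch: time an operation and emit a structured log line.
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("app")

@contextmanager
def timed(operation):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        log.info("operation=%s duration_ms=%.1f", operation, elapsed_ms)

# Usage inside a request handler:
with timed("signup_request"):
    time.sleep(0.05)  # stand-in for real work
```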
Why the brain will always win in the battle against AI
What we call ‘intelligence’ is an activity of the brain. The outcome of that
activity forms our ‘mind’ about things. Even when we sleep, our intelligence
is awake and our mind is being formed. In this context, we must pay attention
to the concept of duality as the first level of multivariate analysis. A
hallmark of intelligence is the willingness to change one's mind. Humans can
think in terms of ranges, options and spectral possibilities. Machines are
only about specificity and exactness. Computing doesn’t entertain opinion.
Yet, calculation is merely one aspect of our mental ability. It has been
exaggerated in our education system. This kind of logic-based intelligence is
quite self-conscious. We are assessed for deductive ability. We are tutored to
think and know but not trained to ‘think about thinking’ or ‘know about not
knowing’. We are barely taught any self-awareness. Emotional Intelligence is
neglected. We are coached in analytical hindsight and acquire a punter’s
foresight based on the computation of odds. No one educates us on esteem,
gratification, empathy, or seduction. We learn these things by ourselves. The
irony is that machines have beaten us on all those aspects that we acquire via
structured learning and tutoring. It is in the emotional, subjective and
artistic areas that mankind holds the advantage.
Data bill: The security vs privacy debate
Encryption is widely acknowledged as the strongest feature of data protection.
Digital banking and financial transactions have increased manifold with the
Reserve Bank of India prescribing the encryption standards. The telecom
sector, however, is limping along on 40-bit key encryption, which is
considered weak. Both cellular voice and messaging are vulnerable to
off-air interception, with experts pointing to the weakness of SMS used
as a second authentication factor in banking, payments and Aadhaar
identification. The Telecom Regulatory Authority of India has rightly
recommended an update of regulation policy and is of the view that encryption
is a reliable tool which should not be interfered with. The end-to-end
encryption on chat platforms is the most secure method of keeping data safe
from hackers and break-ins. The General Data Protection Regulation of the
European Union strongly favours use of encryption for protecting individual
data. However, security agencies around the world want decrypted data and
favour legislation in this regard. The United States, United Kingdom and
Australia support legislation for decryption, while France and Germany are
pro-encryption.
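For a sense of the gap between 40-bit keys and current practice, here is a minimal sketch of modern authenticated encryption using AES-256-GCM from Python's cryptography library. It is purely illustrative and is not the scheme any particular bank, telecom operator, or chat platform actually uses.

```python
# Illustrative only: AES-256-GCM authenticated encryption of a short message.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 2**256 possible keys vs 2**40 for 40-bit
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"OTP: 482913", b"header")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"header")
assert plaintext == b"OTP: 482913"
```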
New motto for CIOs: Move even faster and make sure nothing breaks
The stakes are high for IT professionals running digital transformation
projects with consequences ranging from missed bonuses to going out of
business, according to a new survey. The current motto for survival is "Move
even faster and make sure nothing breaks," IT leaders told Kong in the
company's 2021 Digital Innovation Benchmark report. Sixty-two percent of tech
leaders said they are at risk of being replaced by competitors who innovate
more quickly, according to the survey. Also, 51% of respondents said they will
survive only three years before being acquired or simply going out of business
if they can't evolve fast enough. That number goes up to 84% when the
make-or-break timeline extends to six years. This number is up from 71% in
last year's survey. ... The survey reinforces what many companies realized at
the end of 2020: The pandemic accelerated digital transformation in general
and cloud migrations in particular. Almost 40% of tech leaders in the US and
Europe said that their companies also implemented microservices sooner than
expected due to the pandemic. A majority of respondents (87%) said that
microservice-based applications, distributed applications, and open source
software are the future of IT architecture.
Reflect brings automated no-code web testing to the cloud
Every company is now a software company, or so we’re told, meaning they have to
employ designers and developers capable of building websites and apps. In
tandem, the much-reported software developer shortage means companies across the
spectrum are in a constant battle for top talent. This is opening the doors to
more automated tools that democratize some of the processes involved in shipping
software, while freeing developers to work on other mission-critical tasks. It’s
against this backdrop that Reflect has come to market, serving as an automated,
end-to-end testing platform that allows businesses to test web apps from an end
user’s perspective, identifying glitches before they go live. Founded out of
Philadelphia in 2019, the Y Combinator (YC) alum today announced a $1.8 million
seed round of funding led by Battery Ventures and Craft Ventures, as it looks to
take on incumbents with a slightly different proposition. Similar to others in
the space, Reflect hooks into the various elements of a browser so it can
capture actions the user is taking, including scrolls, taps, clicks, hovers,
field entry, and so on. This can be replicated later as part of an automated
test to monitor the new user signup flow for a SaaS app, for example.
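Reflect records these interactions without requiring any code; for comparison, a hand-scripted equivalent of such a signup-flow test in a tool like Playwright might look something like the sketch below. The URL and selectors are hypothetical, and this is not Reflect's API.

```python
# Hypothetical hand-coded signup-flow test using Playwright (not Reflect's API;
# URL and selectors are illustrative only), typically run under pytest.
from playwright.sync_api import sync_playwright

def test_signup_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/signup")       # hypothetical URL
        page.fill("#email", "new.user@example.com")   # field entry
        page.fill("#password", "s3cret-password")
        page.click("button[type=submit]")             # click
        # Assert the flow lands on the onboarding page before release.
        assert page.url.endswith("/welcome")
        browser.close()
```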
Immigration exemption in data protection law faces further legal challenge
Speaking to Computer Weekly about the appeal, Scotland director at ORG Matthew
Rice said the exemption, which is the first derogation of its kind in 20 years
of UK data protection law, has been justified by the UK government on the
grounds it needs to “stop people from learning that they’re about to be
removed from the country” and consequently absconding. “There was no evidence
to suggest that under previous data protection law…people were making subject
access requests [SARs], getting back that they were due to get a visit from
the immigration services, and then running away,” he said. “The other thing to
bear in mind is that the exemption is blunt because immigration control isn’t
defined in the act or in any part of UK law, and it’s not just about the Home
Office or borders. Any data controller can apply this exemption – it’s
available to your doctor, your landlord, your school, your local authority,
any number of persons that might hold personal data about you.” ... The
non-disclosure of personal data under the immigration exemption therefore
interferes not only with the individual’s access rights but also with a host
of other digital rights granted by the GDPR, including the rights to
rectification, erasure and restriction of processing.
How security pros can prepare for a tsunami of new financial industry regs in 2021
Biometrics can add an extra layer of security when unlocking a smartphone
using a person’s face or fingerprint. But other technologies have raised
privacy concerns among consumers, such as law enforcement leveraging facial
recognition to identify wanted criminals via security cameras in a public
space. This has led to outright bans of facial recognition technology in
several cities, including Boston; San Francisco; Oakland; Portland, Oregon; and
Portland, Maine, to name a few. As these technologies become mainstream, we’ll
need regulations to retain (or in some cases, regain) the trust of consumers
and policymakers. As a step forward, international organizations such as the
FIDO Alliance and the Financial Action Task Force (FATF) are pushing for global
standards around the use of biometrics; the FATF, for example, recently issued
guidance on how to apply a risk-based approach to using digital identity
systems for customer identification and verification. However, the U.S. lags
behind other regions, which have been more progressive in their adoption of
regulations, such as the General Data Protection Regulation (GDPR) in Europe.
In lieu of federal standards, states such as California have implemented their
own regulations, such as the California Consumer Privacy Act (CCPA) and its
upgrade, the California Privacy Rights Act (CPRA).
Quote for the day:
"The first step of any project is to grossly underestimate its complexity and difficulty." -- Nicoll Hunt