You don't really own your phone
When you purchase a phone, you own the physical parts you can hold in your hand.
The display is yours. The chip inside is yours. The camera lenses and sensors
are yours to keep forever and ever. But none of this, not a single piece, is
worth more than its value in scrap without the parts you don't own but are
graciously allowed to use — the copyrighted software and firmware that powers it
all. The companies that hold these copyrights may not care how you use the
product you paid a license for, and you don't hear a lot about them outside of
the right to repair movement. Xiaomi, like Google and every other copyright holder that provides the things that make a smartphone smart, really only wants you to enjoy the product enough to buy from them again the next time you purchase a smart device. Pissing off the people who buy its smartphones isn't a good way for Xiaomi to get those same people to buy another phone, a fitness band, or a robot vacuum cleaner. When you set up a new phone, you agree with these copyright holders
that you'll use the software on their terms.
Edge computing has a bright future, even if nobody's sure quite what that looks like
Edge computing needs scalable, flexible networking. Even if a particular
deployment is stable in size and resource requirements over a long period, to
be economic it must be built from general-purpose tools and techniques that
can cope with a wide variety of demands. To that end, software defined
networking (SDN) has become a focus for future edge developments, although a
range of recent research has identified areas where it doesn't yet quite match
up to the job. SDN's characteristic approach is to split networking into two tasks: control and data transfer. It has a control plane and a data plane, with the former managing the latter through dynamic reconfiguration based on a combination of rules and monitoring. This looks
like a good match for edge computing, but SDN typically has a centralised
control plane that expects a global view of all network activity. ... Various
approaches – multiple control planes, increased intelligence in edge switch
hardware, dynamic network partitioning on demand, geography and flow control –
are under investigation, as are the interactions between security and SDN in
edge management.
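To make the control/data split concrete, here is a minimal, hypothetical Python sketch (the class names and rule format are invented for illustration): a central controller holds the global view and installs forwarding rules, while switches only match packets against the rules they've been given.

```python
# Toy illustration of SDN's control/data split: a central controller
# (control plane) pushes match/action rules to switches (data plane),
# which only forward packets. All names here are invented for the sketch.

class Switch:
    """Data plane: forwards packets according to installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # dst -> out_port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, packet):
        port = self.flow_table.get(packet["dst"])
        if port is None:
            return "punt-to-controller"  # no rule: ask the control plane
        return port

class Controller:
    """Control plane: global view, computes and installs rules."""
    def __init__(self, switches):
        self.switches = switches

    def handle_miss(self, switch, packet):
        out_port = self.compute_route(switch, packet["dst"])
        switch.install_rule(packet["dst"], out_port)  # reconfigure the data plane
        return out_port

    def compute_route(self, switch, dst):
        return hash((switch.name, dst)) % 4  # placeholder routing decision

s1 = Switch("edge-1")
ctl = Controller([s1])
pkt = {"dst": "10.0.0.7"}
if s1.forward(pkt) == "punt-to-controller":
    print("installed port", ctl.handle_miss(s1, pkt))
print("fast path port", s1.forward(pkt))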
TangleBot Malware Reaches Deep into Android Device Functions
In propagation and theme, TangleBot resembles other mobile malware, such as
the FluBot SMS malware that targets the U.K. and Europe, or CovidLock, Android ransomware that pretends to give users a way to find nearby COVID-19 patients. But its wide-ranging access to mobile device
functions is what sets it apart, Cloudmark researchers said. “The malware has
been given the moniker TangleBot because of its many levels of obfuscation and
control over a myriad of entangled device functions, including contacts, SMS
and phone capabilities, call logs, internet access, [GPS], and camera and
microphone,” they noted in a Thursday writeup. To reach such a long arm into
Android’s internal business, TangleBot grants itself privileges to access and
control all of the above, researchers said, meaning that the cyberattackers
would now have carte blanche to mount attacks with a staggering array of
goals. For instance, attackers can manipulate the incoming voice call function
to block calls and can also silently make calls in the background, with users
none the wiser.
Why CEOs Should Absolutely Concern Themselves With Cloud Security
Probably the biggest reason cybersecurity needs to be elevated to one of your
top responsibilities is simply that, as the CEO, you call most of the shots
surrounding how the business is going to operate. To lead anyone else, you
have to have a crystal-clear big picture of how everything interconnects and
what ramifications threats in one area have for other areas. Additionally, it’s
up to you to hire and oversee people who truly understand servers and cloud
security and who can build a secure infrastructure and applications. That
said, virtually all businesses today are “digital” businesses in some sense, whether that means having a website or an app, processing credit cards with point-of-sale readers, or using the internet for social media marketing. All of these
things can be potential points of entry for hackers, who happily take
advantage of any vulnerability they can find. And with more people working
remotely and generally enjoying a more mobile lifestyle, the risks of cloud
computing are here to stay.
Better Incident Management Requires More than Just Data
To the uninitiated, all complexity looks like chaos. Real order requires
understanding. Real understanding requires context. I’ve seen teams all over
the tech world abuse data and metrics because they don’t relate them to their larger context: what are we trying to solve, and how might we be fooling ourselves to reinforce our own biases? Nowhere is this more true than in the world of incident management. Things go wrong in businesses, large and small,
every single day. Those failures often go unreported, as most people see
failure through the lens of blame, and no one wants to admit they made a
mistake. Because of that fact, site reliability engineering (SRE) teams
establishing their own incident management process often invest in the wrong
initial metrics. Many teams are overly concerned with reducing MTTR: mean time
to resolution. Like the British government, those teams rely too heavily on their metrics and don’t consider the larger context. Incidents are almost
always going to be underreported initially: people don’t want to admit things
are going wrong.
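A toy calculation makes the underreporting problem concrete. In this hypothetical Python sketch (all incidents and timestamps are invented), MTTR looks healthy when computed over reported incidents only, then jumps once a single unreported incident is counted:

```python
# Minimal sketch: MTTR computed from reported incidents only vs. all of them.
# Every incident record and timestamp here is made up for illustration.
from datetime import datetime

incidents = [
    {"opened": datetime(2021, 9, 1, 10, 0), "resolved": datetime(2021, 9, 1, 11, 30), "reported": True},
    {"opened": datetime(2021, 9, 3, 14, 0), "resolved": datetime(2021, 9, 3, 14, 20), "reported": True},
    {"opened": datetime(2021, 9, 5, 9, 0),  "resolved": datetime(2021, 9, 5, 17, 0),  "reported": False},  # never filed
]

def mttr_minutes(records):
    """Mean time to resolution, in minutes, over the given records."""
    total = sum((r["resolved"] - r["opened"]).total_seconds() for r in records)
    return total / len(records) / 60

reported = [r for r in incidents if r["reported"]]
print(f"MTTR over reported incidents: {mttr_minutes(reported):.0f} min")   # looks healthy: 55 min
print(f"MTTR including the unreported one: {mttr_minutes(incidents):.0f} min")  # 197 min
```

The metric didn't change because the team got worse; it changed because the denominator was wrong, which is exactly the context problem the excerpt describes.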
Three Skills You’ll Need as a Senior Data Scientist
In the context of data science, I would say critical thinking means answering the “why”s in your data science project. Before elaborating on what I mean, the most important prerequisite is knowing the general flow of a data science project. The diagram below shows that. This is a slightly different view from the cyclic series of steps you might see elsewhere, and I think it is more realistic than seeing it as a cycle. Now, to elaborate. In a data science project,
there are countless decisions you have to make: supervised vs. unsupervised
learning, selecting raw fields of data, feature engineering techniques,
selecting the model, evaluation metrics, etc. Some of these decisions are obvious: if you have a set of features and a label associated with them, you’d go with supervised learning instead of unsupervised learning. But a seemingly tiny checkpoint you overlooked might be enough to sink the project, costing the company money and putting your reputation on the line. When you answer not just “what you’re doing” but also “why you’re doing it”, you close most of the cracks where problems like these can seep in.
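As a concrete (and hypothetical) illustration of the first decision above, here is a minimal Python sketch using scikit-learn on synthetic data: with a trusted label you reach for a classifier, and without one you fall back to clustering.

```python
# Sketch of the supervised-vs-unsupervised decision the excerpt describes:
# labels present -> classifier; labels absent -> clustering. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

have_labels = True  # the "seemingly tiny checkpoint": do we actually trust y?
if have_labels:
    model = LogisticRegression().fit(X, y)          # supervised: features + label
    print("train accuracy:", model.score(X, y))
else:
    model = KMeans(n_clusters=2, n_init=10).fit(X)  # unsupervised: features only
    print("cluster sizes:", [(model.labels_ == k).sum() for k in (0, 1)])
```

The interesting part is the "why" behind the flag: if the label is stale, leaky, or mislabeled, the obvious supervised choice is the wrong one, and that is the kind of crack the excerpt warns about.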
The Benefits and Challenges of Passwordless Authentication
Passwordless authentication is a process that verifies a user's identity with
something other than a password. It strengthens security by eliminating risky password management practices and the attack vectors that exploit them. It is an emerging subfield
of identity and access management and will revolutionize the way employees work.
... Passwordless authentication uses modern authentication methods that reduce the risk of being targeted via phishing attacks. With this approach, when employees receive a phishing email, they have no sensitive credentials to hand over that would give threat actors access to their accounts or other confidential data. ... Passwordless authentication appears to be a secure
and easy-to-use approach, but there are challenges in its deployment. The most significant issues are budget and migration complexity. When setting up a budget for passwordless authentication, enterprises should include the costs of buying hardware and of setting up and configuring it. Another challenge is dealing
with old-school mentalities. Most IT leaders and employees are reluctant to move
away from traditional security methods and try new ones.
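As one concrete example of such a method, here is a minimal, hypothetical Python sketch of an emailed "magic link" flow; the token store is in memory, the email send is stubbed out, and the URL and ten-minute expiry are invented for illustration:

```python
# Minimal sketch of one passwordless method: an emailed "magic link".
# In-memory store, stubbed email; names and expiry are illustrative choices.
import secrets
import time

pending = {}  # token -> (email, expiry timestamp)

def start_login(email):
    token = secrets.token_urlsafe(32)            # unguessable one-time token
    pending[token] = (email, time.time() + 600)  # valid for 10 minutes
    link = f"https://example.com/auth?token={token}"
    print(f"(stub) emailing {email}: {link}")    # real code would send mail
    return token

def finish_login(token):
    entry = pending.pop(token, None)             # single use: removed on first attempt
    if entry is None:
        return None
    email, expires = entry
    if time.time() > expires:
        return None
    return email  # authenticated identity

t = start_login("user@example.com")
print("logged in as:", finish_login(t))
print("replay rejected:", finish_login(t))
```

Because the user never holds a reusable secret, a phishing page has nothing durable to harvest: the token is single-use, short-lived, and bound to the login attempt.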
Using CodeQL to detect client-side vulnerabilities in web applications
The idea of CodeQL is to treat source code as a database that can be queried using SQL-like statements. Many languages are supported, among them JavaScript; both its server-side and client-side flavours are covered. JS CodeQL understands modern editions such as ES6, as well as frameworks like React (with JSX) and Angular. CodeQL is not just grep: it supports taint tracking, which allows you to test whether a given user input (a source) can reach a vulnerable function (a sink). This is especially useful when
dealing with DOM-based Cross Site Scripting vulnerabilities. By tainting a
user-supplied DOM property such as location.hash one can test if this value
actually reaches one of the XSS sinks, e.g. element.innerHTML or
document.write(). The common use-case for CodeQL is to run a query suite against
open-source code repositories. To do so, you can install CodeQL locally or use https://lgtm.com/. In the latter case, you specify a GitHub repository URL and add it as your project.
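To illustrate the source-to-sink idea behind taint tracking (a toy model, not CodeQL or its query language), here is a hypothetical Python sketch that treats assignments as edges in a flow graph and asks whether a user-controlled value like location.hash can reach a sink like element.innerHTML:

```python
# Toy illustration of the source-to-sink idea behind taint tracking
# (not CodeQL itself): model assignments as edges and ask whether a
# user-controlled source can reach a dangerous sink. Names are made up.

# Each edge means "value flows from left to right", e.g. `x = location.hash`.
flows = {
    "location.hash": ["fragment"],
    "fragment": ["html"],
    "html": ["element.innerHTML"],       # a classic DOM XSS sink
    "config.title": ["document.title"],  # benign flow for contrast
}

def reaches(source, sink, seen=None):
    """Depth-first search over the flow graph: is sink tainted by source?"""
    seen = seen or set()
    if source == sink:
        return True
    seen.add(source)
    return any(reaches(nxt, sink, seen)
               for nxt in flows.get(source, []) if nxt not in seen)

print(reaches("location.hash", "element.innerHTML"))  # True: report a finding
print(reaches("config.title", "element.innerHTML"))   # False: no taint path
```

Real CodeQL queries do the same reachability reasoning, but over a full program database with sanitizer-aware flow steps rather than a hand-written edge list.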
Moving beyond agile to become a software innovator
Experience design is a specific capability focused on understanding user
preferences and usage patterns and creating experiences that delight them. The
value of experience design is well established: organizations that have invested in design outperform industry peers by as much as 5 percent per year in shareholder-return growth. What differentiates best-in-class organizations is
that they embed design in every aspect of the product or service development. As
a core part of the agile team, experience designers participate in development
processes by, for example, driving dedicated design sprints and ensuring that
core product artifacts, such as personas and customer journeys, are created and
used throughout product development. This commitment leads to greater adoption
of the products or services created, simpler applications and experiences, and a
substantial reduction of low-value features. ... Rather than approaching it
as a technical issue, the team focused on addressing the full onboarding
journey, including workflow, connectivity, and user communications. The results
were impressive. The team created a market-leading experience that enabled their
first multimillion-dollar sale only four months after it was launched and
continued to accelerate sales and increase customer satisfaction.
The relationship between data SLAs & data products
The data-as-a-product model intends to close the gap that the data lake left
open. In this philosophy, company data is viewed as a product that will be
consumed by internal and external stakeholders. The data team’s role is to
provide that data to the company in ways that promote efficiency, good user
experience, and good decision making. As such, the data providers and data
consumers need to work together to answer the questions put forward above.
Coming to an agreement on those terms and spelling them out is called a data SLA. SLA stands for service-level agreement: a contract between two parties that defines and measures the level of service a given vendor or product will deliver, as well as remedies if it fails to deliver. SLAs are an attempt to
define expectations of the level of service and quality between providers and
consumers. They’re very common when an organization is offering a product or
service to an external customer or stakeholder, but they can also be used
between internal teams within an organization.
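As a hypothetical illustration of what one agreed term might look like in code, this Python sketch checks a data freshness guarantee; the table name, two-hour threshold, and alerting action are all invented:

```python
# Sketch of enforcing one term a data SLA might spell out: table freshness.
# Table name, 2-hour threshold, and timestamps are invented for illustration.
from datetime import datetime, timedelta

sla = {"table": "orders_daily", "max_staleness": timedelta(hours=2)}

def check_freshness(last_loaded_at, now=None):
    """Return (ok, staleness) for the agreed freshness term."""
    now = now or datetime.utcnow()
    staleness = now - last_loaded_at
    return staleness <= sla["max_staleness"], staleness

ok, staleness = check_freshness(datetime.utcnow() - timedelta(hours=3))
if not ok:
    # An SLA also defines remedies; here the remedy is just an alert.
    print(f"SLA breach on {sla['table']}: data is {staleness} old")
```

The point of writing the term down this way is that both provider and consumer can see, unambiguously, whether the agreement is being met.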
Quote for the day:
"If you can't handle others'
disapproval, then leadership isn't for you." --
Miles Anthony Smith