4 fundamental practices in IoT software development
One of the greatest concerns in IoT is security, and how software engineers
address it will play an increasingly important role. As devices interact with each other,
businesses need to be able to securely handle the data deluge. There have
already been many data breaches where smart devices have been the target,
notably Osram, which was found to have vulnerabilities in its IoT lightbulbs,
potentially giving an attacker access to a user’s network and the devices
connected to it. Security needs to be tackled at the start of the design phase,
with requirement tradeoffs made as needed, rather than added as a mere ‘bolt-on’.
This is highly correlated with software robustness. It may take a little more
time to design and build robust software upfront, but secure software is more
reliable and easier to maintain in the long run. A study by CAST suggests that
one-third of security problems are also robustness problems, a finding that is
borne out in our field experience with customers. Despite software developers’
best intentions, management is always looking for shortcuts. In the IoT
ecosystem, first to market is a huge competitive driver, so this could mean that
security, quality and dependability are sacrificed for speed to
release.
Accountability in algorithmic injustice
Often, journalists fixate on finding broken or abusive systems but miss what
happens next. In the majority of cases, little to no justice is found for the
victims. At most, the faulty systems are unceremoniously taken out of
circulation. So, why is it so hard to get justice and accountability when
algorithms go wrong? The answer goes deep into the way society interacts with
technology and exposes fundamental flaws in the way our entire legal system
operates. “I suppose the preliminary question is: do you even know that you’ve
been shafted?” says Karen Yeung, a professor and an expert in law and technology
policy at the University of Birmingham. “There’s just a basic problem of total
opacity that’s really difficult to contend with.” The ADCU, for example, had to
take Uber and Ola to court in the Netherlands to try to gain more insight into
how the companies’ algorithms make automated decisions on everything from how
much pay and deductions drivers receive to whether or not they are fired. Even
then, the court largely refused their request for information.
Further, even if the details of systems are made public, that’s no guarantee
people will be able to fully understand them either – and that includes those
using the systems.
Data Mesh: To Mesh or not to Mesh?
Data Mesh allows teams to curate and generate data and create usable data
products for other teams. It also ensures that platform teams can focus on
data engineering while data professionals handle domain-specific data
issues. While business data professionals are responsible for the quality and
reliability of the data their teams produce, they can call on platform teams
when technical issues arise. Beyond that, Data Mesh design is oriented toward
business users and requires relatively little intervention from platform
teams. This is unlike centralized data teams, which are responsible for
everything from data frameworks and access to handling data-related requests.
In short, Data Mesh’s decentralized architecture encourages each party to
excel in its area of expertise. The platform teams
need to focus on technology, engineering, and data pipelines, while the data
professionals are accountable for ensuring data quality. This holistic approach
ensures end-users can perform their tasks by leveraging data insights without
investing time in acquiring the results of a custom request.
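The division of responsibilities described above can be sketched as a minimal "data product" contract: the domain team owns the data and its quality gate, while the platform supplies the generic publishing machinery. All names here (DataProduct, quality_check, the sales-domain example) are hypothetical illustrations, not part of any Data Mesh standard or tool.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataProduct:
    """Minimal sketch of a Data Mesh 'data product' contract."""
    name: str
    owner_team: str              # the accountable domain team
    schema: dict                 # column -> type: the published contract
    quality_check: Callable[[list], bool]  # domain-owned quality gate

    def publish(self, rows):
        # The platform runs the domain team's own quality check;
        # the platform team never needs to know what "valid" means here.
        if not self.quality_check(rows):
            raise ValueError(f"{self.name}: quality check failed")
        return rows              # a real platform would write to shared storage

orders = DataProduct(
    name="orders_daily",
    owner_team="sales-domain",
    schema={"order_id": "str", "total": "float"},
    quality_check=lambda rows: all(r["total"] >= 0 for r in rows),
)
published = orders.publish([{"order_id": "a1", "total": 9.99}])
```

The point of the sketch is the split: the `quality_check` belongs to the domain team, while `publish` is generic plumbing any platform team could maintain.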
Chase CIO details what entry-level job-seekers need to succeed in Fintech
Never stop learning. The skills you mastered a few years ago may no longer be
relevant today, which is why it’s important to be open to constantly learning.
Whether you are starting your career or have years of experience, take it upon
yourself to learn new skills and technologies. ... The skills required to be a
technologist have evolved, but so have the ways we collaborate with colleagues
across lines of business. One change we’ve really embraced as an organization is embarking on an
agile and product transformation. We’ve taken advantage of the opportunity that
came with the changing behaviors of consumers over the past few years to really
embrace agile at a different scale. This matters tremendously, because when we
deploy code or build an entirely new product, it helps millions of consumers
reach their financial goals. The pace of change has accelerated, but the focus
on making it easier for our customers to bank with Chase has not. Today, we’ve
reorganized ourselves away from project-based teams into product-based teams.
Each product now has a dedicated tech, product, design, and data & analytics
leader to help speed up decision making and improve connectivity and
collaboration.
Attacks using Office macros decline in wake of Microsoft action
"It's a hugely important step Microsoft is taking to start blocking these macros
by default, especially due to how invisible macros are to the majority of
users," adds Nathan Wenzler, chief security strategist at Tenable, a
vulnerability scanning company. "But that doesn't mean the threat is eradicated
or we shouldn't continue to remind users to be vigilant about opening files from
untrusted sources." Other companies are seeing threat actors switching tactics
because of Microsoft's move, too. "The adversaries are aware of it," observes
Tim Bandos, executive vice president of cybersecurity at Xcitium, a maker of an
endpoint security suite. "They're testing out new ways of working around it
because they're clearly not as successful now that Microsoft has made this
change." Users of one notorious malicious program, known as Emotet, have already
begun shifting tactics, he notes. "We've seen them shift recently from
leveraging macros to using URLs to OneDrive or Google Drive," he says.
Solana blockchain and the Proof of History
The consensus mechanism is a fundamental characteristic and differentiator among
blockchains. Solana's consensus mechanism has several novel features, in
particular the Proof of History algorithm, which enables faster processing time
and lower transaction costs. How PoH works is not hard to grasp conceptually.
It's a bit harder to understand how it improves processing time and transaction
costs. The Solana whitepaper is a deep dive into the implementation details, but
it can be easy to miss the forest for the trees. Conceptually, the Proof of
History provides a way to cryptographically prove the passage of time and where
events fall in that timeline. This consensus mechanism is used in tandem with
another more conventional algorithm like the Proof of Work (PoW) or Proof of
Stake (PoS). The Proof of History makes the Proof of Stake more efficient and
resilient in Solana. You can think of PoH as a cryptographic clock: it
timestamps each transaction with a hash that proves where in the timeline the
transaction occurred. This means the entire network can forget about
verifying the temporal claims of nodes and defer reconciling the current state
of the chain.
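The "cryptographic clock" idea can be sketched as a hash chain: each state is the hash of the previous one, so producing entry N provably required N sequential hashes. This is a conceptual illustration only, not Solana's actual implementation (which runs SHA-256 ticks at far higher rates with its own entry format); the function names and tick counts are invented for the example.

```python
import hashlib

def poh_sequence(events, ticks_between=3):
    """Simulate a Proof-of-History-style hash chain: repeated hashing
    cannot be parallelized, so the chain encodes the passage of time,
    and mixing an event into it timestamps the event's position."""
    state = hashlib.sha256(b"genesis").digest()
    log = []  # (tick_index, state_hex, event)
    tick = 0
    for event in events:
        # Advance the clock: sequential hashes that must be done in order.
        for _ in range(ticks_between):
            state = hashlib.sha256(state).digest()
            tick += 1
        # Mix the event into the chain; its position in the chain is its timestamp.
        state = hashlib.sha256(state + event.encode()).digest()
        tick += 1
        log.append((tick, state.hex(), event))
    return log

def verify(events, log, ticks_between=3):
    """Anyone can re-run the chain and confirm each event's position."""
    return log == poh_sequence(events, ticks_between)

entries = poh_sequence(["tx-a", "tx-b"])
assert verify(["tx-a", "tx-b"], entries)
assert not verify(["tx-b", "tx-a"], entries)  # reordering is detectable
```

Verification is cheap and parallelizable (each segment of the chain can be re-hashed independently), which is the property that lets validators skip arguing about when things happened.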
Why and How our AI needs to understand causality
Introducing causality to machine learning can make the model outputs more
robust, and prevent the types of errors described earlier. But what does this
look like? How can we encode causality into a model? The exact approach depends
on the question we are trying to answer and the type of data we have available.
... They trained the model to ask “if I treat this disease, which symptoms would
go away?” and “if I don’t treat this disease, which symptoms would remain?”.
They encoded these questions as two mathematical formulae. Using these questions
brings in causality: if treating a disease causes symptoms to go away, then it’s
a causal relationship. They compared their causal model with a model that only
looked at correlations and found that it performed better — particularly for
rarer diseases and more complex cases. Despite the great potential of machine
learning, and the associated excitement, we must not forget our core statistical
principles. We must go beyond correlation (association) to look at causation,
and build this into our models.
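The treat/don't-treat comparison can be illustrated with a toy structural causal model. The variables and probabilities below are invented for illustration and are not the researchers' actual model: a confounding "season" inflates the observed disease–symptom correlation, while intervening (the do-operator) isolates the symptoms that the disease actually causes.

```python
import random

random.seed(0)

def draw(treat):
    """One sample from a toy structural causal model (illustrative only).
    season confounds both disease and symptom; treat acts as do(cure)."""
    season = random.random() < 0.5
    disease = (random.random() < (0.7 if season else 0.1)) and not treat
    # The symptom depends on the disease AND directly on the season.
    p_symptom = 0.1 + (0.6 if disease else 0.0) + (0.2 if season else 0.0)
    return random.random() < p_symptom

n = 50_000
p_untreated = sum(draw(False) for _ in range(n)) / n  # P(symptom | do(no treat))
p_treated = sum(draw(True) for _ in range(n)) / n     # P(symptom | do(treat))

# "If I treat this disease, which symptoms would go away?"
effect = p_untreated - p_treated  # symptoms attributable to the disease
print(f"P(symptom | do(no treat)) = {p_untreated:.2f}")
print(f"P(symptom | do(treat))    = {p_treated:.2f}")
print(f"causal effect of treatment = {effect:.2f}")
```

Because the season raises the symptom rate even without the disease, a purely correlational model would over-credit treatment; the interventional comparison recovers only the disease's own contribution.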
Cyberattack prevention is cost-effective, so why aren’t businesses investing to protect?
To measure the success of an investment, you first need to quantify the cost of
what you’re trying to protect. In a simplified model, the first step is to
measure the benefits of protection, starting with an asset valuation: how
valuable is this data to me? Those in charge of the budget need to assess the
risk of that data not being protected. If I don’t take the necessary
measures to mitigate the risk by investing in preventative cyber-security tools,
how costly could this be when a breach occurs? It is more cost-effective to
validate an organisation’s controls rather than spending money on more tools. By
adopting specialised frameworks to counteract cyber threats, for instance,
running a threat-informed defence, utilising automated platforms such as
Breach-and-Attack Simulation (BAS), CISOs can continuously test and validate
their system. Similar to a fire drill, BAS can locate which controls are
failing, allowing organisations to remediate the gaps in their defence, making
them cyber ready before the attack occurs.
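One textbook way to put numbers on "how costly could a breach be" is the single-loss/annualized-loss-expectancy calculation. This is a deliberate simplification with hypothetical figures, not the article's own model:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """Classic quantitative-risk formula:
    SLE = asset value * exposure factor (fraction of value lost per incident)
    ALE = SLE * annualized rate of occurrence."""
    sle = asset_value * exposure_factor  # single loss expectancy
    return sle * annual_rate

# Hypothetical numbers: a $2M data asset, a breach destroying 40% of its
# value, expected once every five years.
ale = annualized_loss_expectancy(2_000_000, 0.40, 1 / 5)

# Prevention is worth funding while its annual cost stays below the risk
# it removes -- provided validation (e.g. BAS) shows the controls work.
control_cost = 50_000
assert control_cost < ale  # here the investment clearly pays off
```

The same arithmetic also makes the article's point about validation: a control that fails a BAS drill removes none of the ALE, no matter what it cost.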
Cyber Resiliency: How CIOs Can Prepare for a Cloud Outage
Beyond security issues, cloud outages can open the door to cascading disruptions
affecting both routine business and mission-critical applications. “This can
lead to [issues] ranging from revenue loss to more serious impacts -- such as
putting lives at risk in the case of critical health care applications,”
explains Ravikanth Ganta, a senior director at business consulting firm
Capgemini Americas. A cloud outage’s seriousness hinges on several factors,
including organizational preparedness, the zones and regions affected, and the services
impacted. “In many cases, businesses that build and run their applications in
the cloud can endure a cloud outage with little to no impact if they architect
their applications to take advantage of the automated failover capabilities
readily available in the cloud,” Potter notes. Modular applications designed to
leverage loosely coupled services will typically experience only a minor drop in
availability or performance during a vendor outage and, in many cases, may not
be affected at all. “Customers that ... haven’t architected their applications to
gracefully failover or redirect traffic to unimpacted zones or regions, will
face greater availability challenges when a cloud provider experiences an
outage,” Potter says.
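The graceful-failover pattern Potter describes might be sketched as a priority-ordered retry across regions. The region names and the endpoint stub below are hypothetical; real deployments do this with provider-level health checks, DNS, or load-balancer failover rather than application loops.

```python
def call_region(region, outage):
    """Stand-in for a regional service endpoint (hypothetical)."""
    if region in outage:
        raise ConnectionError(f"{region} is down")
    return f"response from {region}"

def resilient_call(regions, outage=frozenset()):
    """Try each region in priority order, failing over on error --
    a sketch of 'gracefully redirect traffic to unimpacted regions'."""
    last_error = None
    for region in regions:
        try:
            return call_region(region, outage)
        except ConnectionError as err:
            last_error = err  # record and try the next region
    raise RuntimeError("all regions unavailable") from last_error

# Primary region down: traffic silently redirects to the secondary.
print(resilient_call(["us-east-1", "us-west-2"], outage={"us-east-1"}))
```

The design choice the pattern depends on is loose coupling: the caller only works because no request is pinned to state that lives in a single zone.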
Why DesignOps Matters: How to Improve Your Design Processes
“A foundational aspect of DesignOps is the adoption of agile work breakdown
structures (WBSs) to organize UX work from alignment with broad strategic
objectives to screen-level details in a single EAP tool. While this feels
foreign to most UX practitioners at first, agile WBS maps quite well to UX work.
The business and operational benefits of this approach are profound, including
more accurate plans, estimates, tracking and reporting.” With a single working
environment for managers, designers, developers, and even stakeholders as part
of the DesignOps strategy, everyone can easily align their work and tasks, test
and comment on prototypes in real time, eliminate design handoffs, reduce costly
iterations, keep track of progress, and identify bottlenecks. ... There’s no
such thing as a designer who can handle every process and task; in the end,
such a person would do everything but the actual design. Digital product design
is a multi-layered job that requires specialists experienced in particular
fields. Just as there is a need for a separation between UX and UI design, with
two distinct experts handling each, there is a need for a dedicated DesignOps
person.
Quote for the day:
"The task of the leader is to get his
people from where they are to where they have not been." --
Henry A. Kissinger