Proving the value of analytics on the edge
Las Vegas began deploying edge computing technology in 2018 while working on
smart traffic solutions. A key driver for analyzing data at the network edge
came from working with autonomous vehicle companies that needed near real-time
data, Sherwood says. “Edge computing allowed for data to be analyzed and
provided to the recipient in a manner which provided the best in speed,”
Sherwood says. Visualizing data in a real-time format “allows for
decision-makers to make more informed decisions.” The addition of predictive
analytics and artificial intelligence (AI) is helping with decisions that are
improving traffic flows, “and in the near future will have dramatic impacts on
reducing traffic congestion and improving transit times and outcomes,” Sherwood
says. To help bolster its data analytics operations overall and at the edge, the
city government is developing a data analytics group as an offshoot of the IT
department. The Office of Data and Analytics will drive how data is governed and
used within the organization, Sherwood says. “We see lots of opportunities with
many new technologies coming onto the market,” he says.
The Fundamentals of Testing with Persistence Layers
In order to learn how to test with databases, one must first ‘unlearn’ a few
things, starting with the concepts of unit tests and integration tests. To put it
bluntly, the modern definitions of these terms are so far removed from their
original meanings that they are no longer useful for conversation. So, for the
remainder of this article, we aren’t going to use either of them. The
fundamental goal of testing is to produce information. A test should tell you
something about the thing being tested that you may not have known before. The more
information you get, the better. So, we are going to ignore anyone who says, “A
test should only have one assertion” and replace it with, “A test should have as
many assertions as needed to prove a fact”. The next problematic expression we
need to deal with is, “All tests should be isolated”. This is often
misunderstood to mean each test should be full of mocks so the function you’re
testing is segregated from its dependencies. This is nonsense, as that function
won’t be segregated from its dependencies in production.
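To make this concrete, here is a minimal sketch (not from the article) using Python's built-in sqlite3 and unittest. The repository class and schema are hypothetical, but the test runs against a real in-memory database rather than mocks, and it uses as many assertions as needed to prove a single fact.

```python
import sqlite3
import unittest


class CustomerRepository:
    """Hypothetical repository used only to illustrate the testing approach."""

    def __init__(self, conn):
        self.conn = conn

    def add(self, name, email):
        cur = self.conn.execute(
            "INSERT INTO customers (name, email) VALUES (?, ?)", (name, email)
        )
        self.conn.commit()
        return cur.lastrowid

    def find_by_id(self, customer_id):
        return self.conn.execute(
            "SELECT id, name, email FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()


class CustomerRepositoryTest(unittest.TestCase):
    def setUp(self):
        # Use a real (in-memory) database instead of mocking the persistence layer.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
        )
        self.repo = CustomerRepository(self.conn)

    def tearDown(self):
        self.conn.close()

    def test_add_then_read_back(self):
        # As many assertions as needed to prove the fact:
        # the row was persisted and comes back intact.
        new_id = self.repo.add("Ada", "ada@example.com")
        self.assertIsNotNone(new_id)

        row = self.repo.find_by_id(new_id)
        self.assertIsNotNone(row)
        self.assertEqual(row[1], "Ada")
        self.assertEqual(row[2], "ada@example.com")


if __name__ == "__main__":
    unittest.main()
```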
Should We Resign Ourselves To The Great Resignation?
Is the Great Resignation a temporary trend or a long-term structural change?
There’s no way to know, but my money is on the latter. Life-changing events
change lives, whether or not we realize it as they occur. An individual
crisis changes individual behavior; worldwide crises cause lasting social and
cultural consequences. The pandemic completely upended the employee experience,
and while many employers continued to monitor productivity, most didn’t devote
nearly the same amount of effort to soliciting real-time, real-world feedback
from remote workers about the challenges, struggles and stresses they were
facing. McKinsey found that “employees prioritize relational factors, whereas
employers focus on transactional ones.” By neglecting to engage with remote
employees and failing to listen to or address their issues and concerns, employers
missed a once-in-a-lifetime opportunity to build trust within the
organization and loyalty from workers. As the Great Resignation plays out and
the workforce reshuffles, it will be interesting to see if employers and workers
can engage, listen, and trust each other enough to find common ground.
How cyberattacks are changing according to new Microsoft Digital Defense Report
Ransomware offers a low-investment, high-profit business model that’s
irresistible to criminals. What began with single-PC attacks now includes
crippling network-wide attacks using multiple extortion methods to target both
your data and reputation, all enabled by human intelligence. Through this
combination of real-time intelligence and broader criminal tactics, ransomware
operators have driven their profits to unprecedented levels. This human-operated
ransomware, also known as “big game ransomware,” involves criminals hunting for
large targets that will provide a substantial payday through syndicates and
affiliates. Ransomware is becoming a modular system like any other big business,
including ransomware as a service (RaaS). With RaaS there isn’t a single
individual behind a ransomware attack; rather, there are multiple groups. For
example, one threat actor may develop and deploy malware that gives one attacker
access to a certain category of victims, whereas a different actor may merely
deploy malware.
Cybersecurity awareness month: Fight the phish!
Simply put, the phishing “game” only has two moves: the scammers always play
first, trying to trick you, and you always get to play second, after they’ve
sent out their fake message. There’s little or no time limit for your move; you
can ask for as much help as you like; you’ve probably got years of experience
playing this game already; the crooks often make really silly mistakes that are
easy to spot… and if you aren’t sure, you can simply ignore the message that the
crooks just sent, which means you win anyway! How hard can it be to beat the
criminals every time? Of course, as with many things in life, the moment you
take it for granted that you will win every time is often the very same moment
that you stop being careful, and that’s when accidents happen. Don’t forget that
phishing scammers get to try over and over again. They can use email attachments
one day, dodgy web links the next, rogue SMSes the day after that, and if none
of those work, they can send you fraudulent messages on a social network: The
crooks can try threatening you with closing your account, warning you of an
invoice you need to pay, flattering you with false praise, offering you a new
job, or announcing that you’ve won a fake prize.
Edge computing: The architecture of the future
As technology extends deeper into every aspect of business, the tip of the
spear is often some device at the outer edge of the network, whether a
connected industrial controller, a soil moisture sensor, a smartphone, or a
security cam. This ballooning internet of things is already collecting
petabytes of data, some of it processed for analysis and some of it
immediately actionable. So an architectural problem arises: You don’t want to
connect all those devices and stream all that data directly to some
centralized cloud or company data center. The latency and data transfer costs
are too high. That’s where edge computing comes in. It provides the
“intermediating infrastructure and critical services between core datacenters
and intelligent endpoints,” as the research firm IDC puts it. In other words,
edge computing provides a vital layer of compute and storage physically close
to IoT endpoints, so that control devices can respond with low latency – and
edge analytics processing can reduce the amount of data that needs to be
transferred to the core.
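As a rough illustration of that pattern (not drawn from the article), the sketch below assumes a hypothetical batch of sensor readings and thresholds: the edge node reacts locally to anomalies with low latency and forwards only a compact aggregate to the core, cutting the volume of data transferred.

```python
from statistics import mean

# Hypothetical sensor readings collected at the edge over one interval
# (e.g. soil-moisture percentages); names and thresholds are illustrative only.
readings = [41.2, 40.8, 39.9, 12.3, 40.5, 41.0]
ALERT_THRESHOLD = 20.0  # act locally, with low latency, below this value


def act_locally(value):
    # Stand-in for a low-latency response at the endpoint (e.g. open a valve).
    print(f"edge action: reading {value} below threshold")


def send_to_core(summary):
    # Stand-in for the (comparatively expensive) upload to the central cloud.
    print(f"uploading summary to core: {summary}")


# Respond immediately to anomalies at the edge...
for r in readings:
    if r < ALERT_THRESHOLD:
        act_locally(r)

# ...and ship only a compact aggregate to the core, not every raw reading.
send_to_core({
    "count": len(readings),
    "min": min(readings),
    "max": max(readings),
    "mean": round(mean(readings), 2),
})
```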
Test Automation for Software Development
Automating software and security testing in software development is an ongoing
process, yet truly reaching full automation may never happen. In SmartBear
Software’s “2021 State of Software Quality | Testing” report, the percentage of
organizations that conduct all tests manually rose from 5% in 2019 to 11% in
2021. This does not mean that automation is not happening. On the contrary,
both manual and automated tests are being conducted. The biggest challenge to
test automation is no longer dealing with changing functionality but instead
not having enough time to create and conduct tests. Testers are not being
challenged by demands to deploy more frequently but instead to test more
frequently across more environments. Testing of the user interface layer is
more common, and to address this 50% currently conduct some automated
usability testing as compared to just 34% in 2019. The remainder of the
article provides additional highlights on this and two other reports that
highlight DevSecOps metrics and practices.
API Design Principles and Process at Slack
Slack’s list of design principles begins with two ideas: each API should do one
thing well, and the developer experience matters. The first means that APIs
should focus on a specific use case, which makes them more straightforward,
safer and easier to
scale. The authors believe that APIs should be so well designed and documented
that developers should be able to build a simple use case in a matter of
minutes and discover parts of the API intuitively. In case of errors, the API
should return all the information necessary for developers to understand the
cause of the error and take the first steps towards solving it. The fifth
principle concerns scale and performance. The authors provide concrete advice,
recommending pagination of big collections, avoiding nesting big collections
inside other big collections, and implementing rate limiting on the API. The
last principle enumerated by the authors is that breaking changes should be
avoided.
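The sketch below is not Slack's actual API; it is a minimal illustration, with hypothetical field names and limits, of two of those ideas: cursor-style pagination of a big collection and error responses that tell the developer what went wrong and how to fix it.

```python
# Illustrative sketch only; field names and limits are hypothetical.
ITEMS = [f"channel-{i}" for i in range(250)]
PAGE_LIMIT = 100  # cap page size so big collections are always paginated


def list_items(cursor=0, limit=PAGE_LIMIT):
    """Return one page of a large collection plus a cursor for the next page."""
    if limit > PAGE_LIMIT:
        # A meaningful error: what went wrong and how to correct it.
        return {
            "ok": False,
            "error": "limit_too_large",
            "detail": f"limit must be <= {PAGE_LIMIT}; got {limit}",
        }
    page = ITEMS[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(ITEMS) else None
    return {"ok": True, "items": page, "next_cursor": next_cursor}


# A client walks the collection page by page instead of fetching it all at once.
cursor = 0
while cursor is not None:
    resp = list_items(cursor)
    print(len(resp["items"]), "items, next cursor:", resp["next_cursor"])
    cursor = resp["next_cursor"]
```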
How to Build a Strong and Effective Data Retention Policy
The first step toward creating a comprehensive DRP strategy is to identify the
specific business needs the retention policy must address. The next step
should be reviewing the compliance regulations that are applicable to the
entire organization. “Designate a team of individuals across various business
practices to begin data inventorying and devising a plan to implement and
maintain a data retention policy that meets your business requirements while
adhering to compliance regulations,” Gandhi advises. The enterprise's chief
data officer (CDO) should oversee the DRP's design and implementation,
Ferreira recommends. “However, everyone who deals with the data must be aware
of the mechanisms implemented ... so that they can behave in ways that
facilitate the implementation of the DRP,” he adds. “Implementing a robust DRP
may be a top-down decision, but it requires buy-in from all levels of the
organization.” Stakeholders from records, legal, IT, security, privacy, and
other relevant posts and departments all need a chance to weigh in on an
enterprise's data retention policy, Read says.
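As a rough sketch only, with entirely hypothetical data categories and retention periods, the example below shows how a retention schedule agreed by such a team might be captured in machine-readable form so it can be applied consistently; real periods would come from the business needs and compliance regulations identified above.

```python
from datetime import date, timedelta

# Hypothetical retention schedule; actual periods must come from business
# requirements and applicable compliance regulations.
RETENTION_SCHEDULE = {
    "customer_invoices": timedelta(days=7 * 365),  # e.g. tax-driven retention
    "web_server_logs": timedelta(days=90),
    "marketing_leads": timedelta(days=2 * 365),
}


def is_expired(category, created_on, today=None):
    """True if a record in this category has outlived its retention period."""
    today = today or date.today()
    return today - created_on > RETENTION_SCHEDULE[category]


# Example: a three-year-old web server log is past its 90-day retention window.
print(is_expired("web_server_logs", date(2018, 10, 1), today=date(2021, 10, 1)))
```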
FSU’s university-wide resiliency program focuses on doing the basics better
In addition to its far-reaching geographical footprint, FSU has a broad range
of operational needs to support the diversity of work typical of a university.
It also has distributed IT. All those factors make for additional levels of
complexity within disaster recovery and business continuity plans.
Furthermore, at the time of the audit, the university had 307 different units
expected to devise their own disaster recovery plans as well as complete
an annual 140-question risk assessment. Hunkapiller sought to overcome
those complexities by using a multipronged approach to first tackle the
inadequacies in the university’s business continuity, disaster preparedness
and response capabilities and then encourage continuous improvement. “The idea
was to better identify risks, improve our vulnerability management and
resiliency plans, ensure continuity of operations and bring risk down to a
level that was tolerable,” says Hunkapiller, who worked with FSU’s Department
of Emergency Management to devise Seminole Secure.
Quote for the day:
"So much of what we call management consists in making it difficult for people to work." -- Peter Drucker