Windows 10 upgrades are rarely useful, say IT admins
There is a disconnect between Microsoft's efforts and expectations – months of
development time and testing to produce features and functionality that
customers will clamor for – and the reaction by, in electioneering terms, a
landslide-sized majority of those customers. In many cases, IT admins simply
shrug at what Microsoft trumpets. "I understand the concept of WaaS, and the
ability to upgrade the OS without a wipe/re-install is a good concept," one of
those polled said. "But let's concentrate more on useful features, like an
upgraded File Explorer, a Start menu that always works, and context-sensitive
(and useful) help, and less on, 'It's time to release a new feature update,
whether it has any useful new features or not.'" Some were considerably
harsher in taking feature upgrades to task. "Don't have a clue why they think
some of the new features might be worth our time, or even theirs," said
another of those polled. And others decried what they saw as wasted
opportunities. "It's mostly bells, whistles and window-dressing," one IT admin
said. "It seems like no fundamental problems are tackled. Although updates DO
every now and then cause new problems in fundamental functionality. Looks like
there's at least some scratching done on the fundamental surface – but
without explanation."
Adaptive Architecture: A Bridge between Fashion and Technology
Conceptually, IT borrowed a lot of themes from Civil Engineering, one being
Architecture. Despite the 3,000 years that separate the two fields, Architecture and Software Architecture share a common vocabulary across their many definitions, with words such as "structure", "components", and "environment". At first, that relationship was very strong because the technology was "more concrete", heavier, and, obviously, slower. Everything was extremely difficult to change, and applications used to survive without an update for quite a long time. But as computers advanced, the world became submerged in a massive flow of information on digital platforms, and customers could connect directly to businesses through these channels, conditions that demand that companies be able to push reliable changes to their websites or applications every day, or even multiple times a day. This progress didn't happen overnight, and as digital evolved, the technical landscape started to change, reflecting new requirements and problems. In 2001, in an effort to understand these obstacles to developing software, obstacles still relevant to this day, seventeen people gathered in the Wasatch mountains of Utah. From that gathering came "The Agile Manifesto", a declaration based on four key values and 12 principles, establishing a mindset called "Agile".
Deep Dive into OWIN Katana
OWIN stands for Open Web Interface for .NET. OWIN is an open standard specification which defines a standard interface between .NET web servers and web applications. The aim is to provide a standard interface which is simple, pluggable and lightweight. OWIN is motivated by the development of web frameworks in other coding languages, such as Node.js for JavaScript, Rack for Ruby, and WSGI for Python. All these web frameworks are designed to be fast,
simple and they enable the development of web applications in a modular way.
In contrast, prior to OWIN, every .NET web application required a dependency on System.Web.dll, which tightly coupled it to Microsoft's IIS (Internet Information Services). This meant that .NET web applications came with a number of application component stacks from IIS, whether they were actually required or not. This made .NET web applications, as a whole, heavier, and they performed more slowly than their counterparts in other coding languages in many benchmarks. OWIN was initiated by members of Microsoft's developer communities, such as the C#, F# and dynamic programming communities. Thus, the specification is largely influenced by the programming paradigms of those communities.
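To make the idea of a "simple, pluggable and lightweight" interface concrete, here is a minimal example of WSGI, the Python specification the excerpt names as one of OWIN's inspirations (the handler name and port are arbitrary choices for illustration). OWIN's own application delegate plays the analogous role for .NET: the host hands the application a dictionary describing the request and the application produces the response, with no dependency on any particular web server.

```python
from wsgiref.simple_server import make_server

def simple_app(environ, start_response):
    # environ: a dict describing the request (path, headers, ...), supplied by
    # whatever WSGI-compliant server hosts the app -- no tie to one web server.
    body = ("Hello from " + environ.get("PATH_INFO", "/")).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # The built-in reference server is used here; gunicorn, uWSGI or any other
    # WSGI server could host the same function unchanged -- that pluggability
    # is the property OWIN set out to bring to .NET.
    with make_server("127.0.0.1", 8000, simple_app) as server:
        server.serve_forever()
```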
Banking on digitalisation: A transformation journey enabled by technology, powered by humans
Banks are now staring at the massive challenge of continuing their digital investments in a cost-constrained environment. Getting their workforce ready to develop these technologies while continuing to deliver value to their customers is another issue. At the same time, they are competing with new digital banks
that will undoubtedly come in with newer technology built on modern architecture
without the legacy debt. However, there are industry players that may have
cracked the code to successful digitalisation. I know of incumbent banks as well
as digital banks developing world-class digital capabilities at lower costs,
while training their people to make full use of their new digital investments.
Recently the finance function of a leading global universal bank adopted a
“citizen-led” digital transformation, training 300+ “citizen” developers who
identified 200+ new use cases resulting in an annual run rate cost reduction of
$15 million. This case study highlights the importance of engaging and upskilling your workforce while contributing to bottom-line benefits. Over the
last two decades, technology by itself has evolved and now has the ability to
transform whole businesses in the financial services sector, similar to its
impact on other industries such as retail and media. Traditionally, for banks,
technology was a support function enabling product and customer strategies.
Google details RigL algorithm for building more efficient neural networks
Google researchers put RigL to the test in an experiment involving an image
processing model. It was given the task of analyzing images containing
different characters. During the model training phase, RigL determined that
the AI only needs to analyze the character in the foreground of each image and
can skip processing the background pixels, which don’t contain any useful
information. The algorithm then removed connections used for processing
background pixels and added new, more efficient ones in their places.
“The algorithm identifies which neurons should be active during training,
which helps the optimization process to utilize the most relevant connections
and results in better sparse solutions,” Google research engineers Utku Evci
and Pablo Samuel Castro explained in a blog post. “At regularly spaced
intervals we remove a fraction of the connections.” There are other methods
besides RigL that attempt to compress neural networks by removing redundant
connections. However, those methods have the downside of significantly
reducing the compressed model’s accuracy, which limits their practical
application. Google says RigL achieves higher accuracy than three of the most
sophisticated alternative techniques while also “consistently requiring fewer
FLOPs (and memory footprint) than the other methods.”
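A minimal NumPy sketch of the drop-and-grow cycle the excerpt describes, assuming a single weight matrix and a fixed drop fraction for simplicity (the function name, the drop_fraction default, and the single-layer framing are illustrative choices, not Google's code):

```python
import numpy as np

def rigl_update(weights, mask, dense_grad, drop_fraction=0.3):
    """One drop-and-grow step for a single layer (illustrative sketch).

    weights, mask and dense_grad share one shape; mask is 1 where a
    connection is active, 0 where it has been removed.
    """
    n_change = int(drop_fraction * mask.sum())

    # Drop: deactivate the active connections with the smallest magnitudes.
    active = np.flatnonzero(mask)
    drop = active[np.argsort(np.abs(weights).ravel()[active])[:n_change]]
    np.put(mask, drop, 0)

    # Grow: reactivate the inactive connections whose dense-gradient magnitude
    # is largest, i.e. where adding a connection looks most useful right now.
    inactive = np.flatnonzero(mask == 0)
    grow = inactive[np.argsort(np.abs(dense_grad).ravel()[inactive])[::-1][:n_change]]
    np.put(mask, grow, 1)
    np.put(weights, grow, 0.0)  # newly grown connections start from zero

    # Exactly as many connections were grown as dropped, so sparsity is constant.
    return weights * mask, mask
```

In the published algorithm this update runs only at regularly spaced intervals, the drop fraction decays over the course of training, and the dense gradient is materialized just for the update step, so the rest of training stays sparse.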
IBM, AI And The Battle For Cybersecurity
While older adversarial attack patterns were algorithmic and easier to detect,
new attacks add AI features such as natural language processing and more natural human-computer interaction to make malware more evasive, pervasive and
scalable. The malware will use AI to keep changing form in order to be more
evasive and fool common detection techniques and rules. Automated techniques can make the malware more scalable and, combined with AI, allow it to move laterally through an enterprise and attack targets without human intervention. The use of AI in cybersecurity attacks will likely become more pervasive. Better spam can be crafted that avoids detection, or that is personalized to a specific target as a form of spear-phishing attack, by using natural language processing to craft more human-like messages. In addition, malware can be smart enough to
understand when it is in a honeypot or sandbox and will avoid malicious
execution to look more benign and not tip off security defenses. Adversarial
AI attacks the human element with the use of AI augmented chatbots to disguise
the attack with human-like emulation. This can escalate to the point where AI
powered voice synthesis can fool people into believing that they’re dealing
with a real human within their organization.
'We built two data centers in the middle of the pandemic'
With a substantial proportion of chips and components coming from the Wuhan
region in China, supply chains were already facing delays. After negotiation
with suppliers, Harvey's team managed to procure the right equipment on time,
air-freighting components to the island from the UK mainland instead of using
ferry services as usual. As the States of Guernsey started restricting
travel, a local Agilisys team was then designated to pick up the data centers'
build. The team's head of IT services Shona Leavey remembers juggling the
requirements for the build, while also setting up civil servants with laptops
to make sure the state could continue to deliver public services, even
remotely. "We were rolling out Teams to civil servants, and at the same
time had some of the team working on the actual data center build," Leavey
tells ZDNet. "Any concept of a typical nine-to-five went out the window."
Given the timeline for the build, it became evident that some engineers would
have to go into the data centers to set up the equipment during the early
months of summer. That meant the Agilisys team started a long, thorough health and safety assessment.
Deepfake Detection Poses Problematic Technology Race
The problem is well known among researchers. Take Microsoft's Sept. 1
announcement of a tool designed to help detect deepfake videos. The Microsoft
Video Authenticator detects possible deepfakes by finding the boundary between
inserted images and the original video, providing a score for the video as it
plays. While the technology is being released as a way to detect issues during
the election cycle, Microsoft warned that disinformation groups will quickly
adapt. "The fact that [the images are] generated by AI that can continue to
learn makes it inevitable that they will beat conventional detection
technology," said Tom Burt, corporate vice president of customer security and
trust, and Eric Horvitz, chief scientific officer, in a blog post describing the
technology. "However, in the short run, such as the upcoming US election,
advanced detection technologies can be a useful tool to help discerning users
identify deepfakes." Microsoft is not alone in considering current deepfake
detection technology as a temporary fix. In its Deepfake Detection Challenge (DFDC) in early summer, Facebook found the winning algorithm only accurately
detected fake videos about two-thirds of the time.
Deliver Faster by Killing the Test Column
Instead of testers simply picking work out of this column and working on it till
it’s done, they should work with the team to help them understand how they
approach testing, the types of things they are looking for and also finding
during testing. Doing this with a handful of tasks is likely to help them
identify some key themes within their work. For example, are there similar root
causes such as usability or accessibility issues, or some hardware/software
combination that always results in a bug? Is there something the devs could look
out for while making the changes? These themes can be used to create a backlog
of tasks that the team can begin to tackle to see if they can be addressed
earlier on in the development life cycle. By focusing on the process and not the people, it becomes easier to talk about what testers are doing and how developers and testers could mitigate this work earlier in the life cycle, and this begins to sow the seeds of the continuous improvement programme. Leadership in this process
is very important. Leaders need to help testers feel comfortable that they are
not being targeted as the "problem" within the team, but are actually the
solution in educating the team in what risks they are looking for when testing.
Mitigating Cyber-Risk While We're (Still) Working from Home
At home, most folks use a router provided by their Internet service provider.
The home router has a firewall and NAT functionality so your family can safely
connect out to your favorite websites, and those websites can send the data you
asked for back to you. However, with most employees now working at home,
enterprise-grade firewalls at the edge of corporate networks are no longer
protecting them or providing the needed visibility for IT to help keep the
corporate users safe. That's where having an endpoint security solution that can
provide visibility, segment and limit access between different internal networks
and laptop devices can come in handy. With CISOs, government employees, and
business executives sharing home networks with their 15-year-old gamers and
TikTok addicts, it's imperative to extend the principles of least privilege to
the systems with important data inside the home network. This means that even if a bad actor gains access to your kid's network, your laptop and your organization's internal assets stay in the clear. When it comes to proactively protecting
against cyber threats, segmentation is one of the best ways to ensure that bad
actors stay contained when they breach the perimeter. Because, let's be honest,
it's bound to happen.
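A small, purely illustrative Python sketch of that least-privilege segmentation idea, using made-up subnet addresses and segment names (none of these values come from the article): segments sit behind a default-deny policy, and traffic between them is permitted only if explicitly allowlisted, which is what keeps a compromised device on the family side away from the work laptop.

```python
import ipaddress

# Hypothetical home-network segments -- addresses are invented for the example.
SEGMENTS = {
    "work":   ipaddress.ip_network("192.168.10.0/24"),   # work laptop
    "family": ipaddress.ip_network("192.168.20.0/24"),   # kids' gaming / IoT gear
}

# Least privilege as a default-deny allowlist: only the (source, destination)
# segment pairs listed here may talk; everything else is blocked.
ALLOWED_FLOWS = {
    ("work", "work"),   # work devices may reach each other
    # deliberately no ("family", "work") entry: a compromised family device
    # cannot reach the work segment.
}

def segment_of(ip: str):
    """Return the name of the segment an address belongs to, or None."""
    addr = ipaddress.ip_address(ip)
    return next((name for name, net in SEGMENTS.items() if addr in net), None)

def is_allowed(src_ip: str, dst_ip: str) -> bool:
    """Default deny: a flow is permitted only if its segment pair is allowlisted."""
    return (segment_of(src_ip), segment_of(dst_ip)) in ALLOWED_FLOWS

print(is_allowed("192.168.20.15", "192.168.10.5"))  # False: family -> work is blocked
print(is_allowed("192.168.10.5", "192.168.10.7"))   # True: work -> work is allowed
```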
Quote for the day:
"Challenges are what make life interesting and overcoming them is what makes life meaningful." --Joshua Marine