The Role of EiPaaS in Enterprise Architecture: Part 1
The fourth stage of the enterprise architecture emerged as a result of internal organizational changes and the external market outlook: mainly decentralized architecture styles (microservices and cloud native) and agile processes. Each function or line of business (LoB) seeks autonomy by recruiting its own technical teams and owning the entire lifecycle (plan, build, test, run, manage) of the systems and subsystems it makes or buys. The enterprise architecture utilizes platforms running on internal and external cloud infrastructures to facilitate this.
Multitenancy and segmentation are some of the techniques used to provide
platform capabilities to each LoB. As a result, the integration logic and the
implementation responsibility also move to each LoB. However, the platform
approach of this fourth stage incorporates centralized governance, security,
monitoring, and standardization of technology and patterns. It is important to use platforms in such an environment; otherwise, LoBs will start building shadow IT applications private to their function, and the IT team will lose control of those applications.
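As a rough illustration of the multitenancy and segmentation idea, here is a minimal Python sketch assuming a hypothetical platform API: each LoB provisions its own isolated tenant, but centrally governed settings are stamped onto every tenant, so the IT team keeps control. All names and fields are illustrative, not any particular EiPaaS product's API.

from dataclasses import dataclass, field

# Centrally governed settings applied to every tenant (hypothetical example).
CENTRAL_POLICY = {"tls_required": True, "audit_logging": True}

@dataclass
class Tenant:
    lob: str                 # line of business that owns this tenant
    cpu_quota: int           # segmentation: per-LoB resource limit
    settings: dict = field(default_factory=dict)

def provision_tenant(lob: str, cpu_quota: int) -> Tenant:
    """LoBs self-provision autonomously, but central policy always applies."""
    return Tenant(lob=lob, cpu_quota=cpu_quota, settings=dict(CENTRAL_POLICY))

finance = provision_tenant("finance", cpu_quota=32)
assert finance.settings["audit_logging"]   # governance cannot be opted out of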
Getting ahead of the curve on mitigating mobile fraud
Google and the other app store providers will continuously review their security
procedures to make their platforms and devices more secure. But big tech companies like Google have to deal with such a constant flood of new apps and updates that some malicious apps are bound to find their way onto the store. There has also long been a case for educating customers about the threats they face. Banks make visible efforts to warn customers about potential threats, urging them not to click suspicious links sent via SMS or email and not to download anything to their device from an untrusted source. But the truth is,
inevitably, someone will make a mistake as fraudsters use various techniques to
gain a user’s trust. With apps seeming completely harmless, it’s all too easy
for precisely this to happen. By the time banks warn their customers about
specific threats, the likelihood is that fraudsters are already evolving beyond
those techniques, finding new ways to fool their unsuspecting victims.
IT talent and the Great Resignation: 8 ways to nurture retention
Technology employees have never had more opportunities than they do right now to
advance their skills online, network at virtual events, and work remotely
without relocating to tech hubs. They can dip their toes in multiple pools and
switch streams relatively easily. And after months of toiling to keep their
organizations going amid turbulent times, the urge to seek out calmer (or more
rewarding) seas is strong. “IT professionals are highly valued members of
company teams, and opportunity for these skilled individuals to develop or move
on seems endless these days,” says Michele Bailey, author of The Currency Of
Gratitude: Turning Small Gestures Into Powerful Business Results and CEO of The
Blazing Group. “On top of that, the many changes and challenges brought by the
pandemic have increased stress levels among us all. There is certainly plenty of
reason for stressed-out IT leaders to look outside their existing roles for new
opportunities and a better work-life balance." For CIOs who want to retain their top talent, convincing people to stay can be a tough sell.
Kafka Or Pulsar? A Battle Of The Giants Concerning Streaming
The two-fold vision is, first, to build resiliency into software, such that
loosely coupled services can be started, stopped, paused, or restarted as
needed. By “services,” we mean the discrete programs that correspond to a
cloud-native app’s constitutive functions. This makes it possible to scale
cloud-native apps by adding or subtracting instances of services. Second, and
concomitant with this, cloud-native design aims to make business services
observable – i.e., susceptible to fine-grained control and manipulation – by
humans and machines alike. You are not scaling servers, storage, and network
capacity; you are, in effect, adjusting sliders that permit you to manipulate
the behavior of the service. Human beings can do this, manually … but so can
machines – automatically, in accordance with predefined rules. As I write in a separate piece (for a different venue) that has not yet been published: "Observability instrumentation makes it easier for operations personnel to provision extra resources in response to an observed service impairment."
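As a toy illustration of such a predefined rule (my sketch, not from the unpublished piece), the "slider" here is simply a service's instance count, adjusted automatically from observed telemetry; the metric names and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class ServiceMetrics:
    p99_latency_ms: float    # observed tail latency
    error_rate: float        # observed fraction of failed requests

def desired_instances(current: int, m: ServiceMetrics) -> int:
    """Predefined rule: scale out on impairment, scale in when healthy."""
    if m.p99_latency_ms > 500 or m.error_rate > 0.05:
        return current + 1   # provision extra capacity for the impaired service
    if m.p99_latency_ms < 100 and m.error_rate < 0.01 and current > 1:
        return current - 1   # release instances no longer needed
    return current

print(desired_instances(3, ServiceMetrics(p99_latency_ms=650, error_rate=0.02)))  # 4

A human operator could apply the same rule by hand; encoding it as above is what lets a machine apply it continuously.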
Intelligent Process Automation Can Give Your Company a Powerful Competitive Advantage
McKinsey defines IPA as “a collection of business-process improvements and
modern technologies that combines fundamental process redesign with robotic
process automation (RPA), artificial intelligence (AI), machine learning (ML),
and cognitive technologies like optical character recognition (OCR) and
natural language processing (NLP).” It helps organizations redesign processes
and workflows in alignment with customer journeys for seamless experiences,
digitize data for personalization and insights, and automate mundane tasks to
achieve groundbreaking increases in productivity. In the world of operations,
IPA is a Swiss Army knife. CEOs love its power to transform customer and
employee experiences; CFOs appreciate its potential to grow efficiency
exponentially; line-of-business leaders like the clear results it delivers;
chief information officers embrace it as a digital accelerator and a way to
demonstrate business outcomes. One U.S. health insurer, after adopting IPA
across its enterprise, found it could process claims six times
faster.
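To see how those ingredients fit together, here is a deliberately simplified sketch of an IPA-style claims flow, with placeholder functions standing in for the OCR, NLP, and RPA pieces; none of this is the insurer's actual pipeline.

def ocr_extract(document_image: bytes) -> str:
    """Placeholder for an OCR step that digitizes a scanned claim form."""
    return "routine checkup, member 12345"   # stubbed output for the sketch

def classify_claim(text: str) -> str:
    """Placeholder for an ML/NLP classifier; here a trivial keyword rule."""
    return "auto_approve" if "routine checkup" in text else "manual_review"

def process_claim(document_image: bytes) -> str:
    """Digitize, decide, and route: the mundane path needs no human touch."""
    text = ocr_extract(document_image)
    return classify_claim(text)

print(process_claim(b"..."))   # auto_approve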
Why Change Intelligence Is Necessary to Effectively Troubleshoot Modern Applications
To be able to truly gain the insights you require from your systems when
problems arise, you need to add another piece to the puzzle - and that is
Change Intelligence. Change Intelligence means understanding not only when something has changed, but also why it changed, who changed it, and what impact the change has had on your systems. The existing onslaught of data is often overwhelming for operations engineers, so Change Intelligence was introduced to provide broader context around the telemetry and the information you already have. For example, if you have three services talking to each other and one of them has an elevated error rate, your telemetry gives you a good indication that something is wrong. That is an excellent basis for suspicion, but the next and more critical step is always to dig for the root cause behind the anomalous telemetry data.
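A minimal sketch of that digging step, assuming a hypothetical change log: given the service with the elevated error rate and the time the anomaly started, list the changes (and who made them) recorded shortly beforehand. The event fields are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ChangeEvent:
    when: datetime
    who: str          # who changed it
    what: str         # what changed: deploy, config edit, feature flag, ...
    service: str

def suspect_changes(log: list, service: str, anomaly_start: datetime,
                    window: timedelta = timedelta(hours=1)) -> list:
    """Return changes to the affected service just before the anomaly began."""
    return [e for e in log
            if e.service == service
            and anomaly_start - window <= e.when <= anomaly_start]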
Twitter: Head of Security Reportedly Fired; CISO to Leave
In a memo shared with employees and accessed by The New York Times, the social media platform reportedly said, "The changes followed an assessment of how the organization was being led and the impact on top-priority work." Twitter's
head of privacy engineering, Lea Kissner, will become the company's interim
CISO, according to the report. Reportedly, after assuming the CEO position,
Agrawal reorganized the management staff and dismissed Dantley Davis, the
chief design officer, and Michael Montano, the head of engineering. In a
previous filing with the Securities and Exchange Commission, Twitter said that
Agrawal is restructuring the leadership team to drive increased
accountability, speed and operational efficiency, and shifting to a general
manager model for consumer, revenue and core technologies, which will be led
by Kayvon Beykpour, Bruce Falck and Nick Caldwell, respectively. "These GMs
will lead all core teams across engineering, product management, design, and
research."
Fraud detection is great, but you also need prevention
A lot of the complexity of fraud detection comes from the fact that most fraud
solutions focus solely on bad actors. They specialize in identifying the
criminals by looking for suspicious factors. A new approach that is becoming more common is to add a stage before the fraud detection phase: positive validation. The overwhelming majority of customers are real people, with real, trustworthy histories and identities. If most of them can be identified confidently at the start, then the fraud detection problem becomes more manageable. The fraud team's resources can then be concentrated on the cases where there is real cause for doubt, applying judicious friction (such as email validation or multi-factor authentication) where appropriate.
Positive validation has become a practical possibility partially due to online
companies’ increased desire to collaborate with one another. Using
providerless technology, generally based on some form of Privacy Enhancing
Technology, companies can validate and vouch for trustworthy customers without
sharing any personal user information with one another.
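A minimal sketch of what that ordering could look like, with made-up signal names: positively validated users pass straight through, and only the doubtful remainder sees friction or full fraud review.

def is_positively_validated(user: dict) -> bool:
    """Placeholder: e.g., vouched for via privacy-preserving cross-company signals."""
    return user.get("trust_score", 0.0) >= 0.9

def handle_customer(user: dict) -> str:
    if is_positively_validated(user):
        return "approve"                 # the trustworthy majority, no friction
    if user.get("passed_mfa", False):    # judicious friction for doubtful cases
        return "approve"
    return "fraud_review"                # analysts focus where doubt is real

print(handle_customer({"trust_score": 0.95}))   # approve
print(handle_customer({"trust_score": 0.4}))    # fraud_review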
How quantum computing is helping businesses to meet objectives
According to Oberreuter, once a quantum computer becomes involved in the problem-solving process, the optimal solution can genuinely be found, allowing businesses to identify the best arrangement for the problem. While the current quantum computers suited to this kind of problem, called quantum annealers, now have over 5,000 qubits, companies that enlist Reply's services often find that their problems involve 16,000-20,000 variables or more, which calls for further progress in the space. "You can solve this by making approximations," commented the Reply data
scientist. “We’ve been writing a program that is determining an approximate
solution of this objective function, and we have tested it beyond the usual
number of qubits needed. “The system is set up in a way that prevents running
time from increasing exponentially, which results in a business-friendly
running time of a couple of seconds. This reduces the quality of the solution,
but we get a 10-15% better result than what business heuristics are typically
providing.”
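For context (my illustration, not Reply's program), the objective a quantum annealer minimizes is typically a QUBO, f(x) = x^T Q x over binary variables x. A toy instance can be checked exhaustively, as below; annealers or classical approximations take over once the variable count outgrows brute force. The coefficients here are made up.

import itertools
import numpy as np

# Toy QUBO: minimize f(x) = x^T Q x over x in {0,1}^3.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

best_value, best_x = min(
    (float(np.array(x) @ Q @ np.array(x)), x)
    for x in itertools.product([0, 1], repeat=len(Q))
)
print(best_value, best_x)   # exhaustive search only works at toy scale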
EU Plans to Build Its Own DNS Infrastructure
A commission spokesperson tells Information Security Media Group, "This
initiative addresses the lack of significant EU investment in free and public
DNS resolution and enables the deployment of an alternative to existing
solutions in a market that is characterized by a consolidation of this service
in the hands of a few non-EU providers." The commission says this new DNS
infrastructure proposition is crucial because "the processing of DNS data can
have an impact on privacy and data protection rights" of internet users in the EU.
The deployment and usage of this new infrastructure means that data protection
and privacy will be strictly governed by rules applicable in the EU - such as
GDPR, among others - and this will "ensure that DNS resolution data are
processed in Europe and personal data are not monetized." Currently, many DNS resolvers do not operate under EU privacy legislation, such as GDPR and ePrivacy, and could potentially allow operators to track user activity clandestinely and to block or manipulate requests, for example by inserting advertisements or custom search results.
Quote for the day:
"No organization should be allowed near disaster unless they are willing to cooperate with some level of established leadership." -- Irwin Redlener