Enterprise Architecture and Tech Debt
Architects must assess the changed needs of the business (customers, staff, and supply chain) and identify efficient technology to support those new requirements. There is an opportunity to walk away from legacy technology containing Unplanned Tech Debt that has never been corrected, the result of poor practices or poorly communicated requirements. The move to remote workspaces may present the option to discontinue equipment or applications that have become instances of Creeping Tech Debt, where features become obsolete, replaced by better, faster, more capable upgrades, or where the applications and operating systems are no longer supported, causing security vulnerabilities. Changes in market dynamics, as the customer base struggles to understand its new needs, constraints, and opportunities, invite architects and product developers to consider incurring Intentional Tech Debt. By releasing prototypes and minimum viable products (MVPs), teams make customers partners in product development, helping to build the plane even as it reaches cruising altitude. Architects know this will entail false starts as perceived requirements morph or fade away, and will require rework as the product matures.
Understanding GraphQL engine implementations
Generic and flexible are the key words here and it’s important to realize that
it’s hard to keep generic APIs performant. Performance is the number one
reason that someone would write a highly customized endpoint in REST (e.g. to
join specific data together) and that is exactly what GraphQL tries to
eliminate. In other words, it’s a tradeoff, which typically means we can’t have our cake and eat it too. However, is that true? Can’t we get both the
generality of GraphQL and the performance of custom endpoints? It depends! Let
me first explain what GraphQL is, and what it does really well. Then I’ll
discuss how this awesomeness moves problems toward the back-end
implementation. Finally, we’ll zoom into different solutions that boost the
performance while keeping the generality, and how that compares to what we at
Fauna call “native” GraphQL, a solution that offers an out-of-the-box GraphQL
layer on top of a database while keeping the performance and advantages of the
underlying database. Before we can explain what makes a GraphQL API “native,”
we need to explain GraphQL. After all, GraphQL is a multi-headed beast in the
sense that it can be used for many different things. First things first:
GraphQL is, in essence, a specification that defines three things: schema
syntax, query syntax, and a query execution reference.
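To make those three pieces concrete, here is a minimal sketch using the graphql-js reference implementation (the graphql npm package, with its object-argument API): the Product type, its resolver, and the query are hypothetical and only illustrate schema syntax, query syntax, and execution; this is not Fauna's native GraphQL layer.

```typescript
import { graphql, buildSchema } from "graphql";

// Schema syntax: a hypothetical type and a single query field.
const schema = buildSchema(`
  type Product {
    id: ID!
    name: String!
    price: Float!
  }

  type Query {
    product(id: ID!): Product
  }
`);

// Execution: resolvers supply the data; here a hard-coded lookup stands in
// for whatever the back end (database, REST service, ...) actually does.
const rootValue = {
  product: ({ id }: { id: string }) => ({ id, name: "Sample", price: 9.99 }),
};

// Query syntax: the client asks only for the fields it needs.
const source = `{ product(id: "1") { name price } }`;

graphql({ schema, source, rootValue }).then((result) => {
  console.log(JSON.stringify(result.data));
  // => {"product":{"name":"Sample","price":9.99}}
});
```

The generality is visible even in this toy example: the client decides which fields come back, and the resolver behind the schema is where the performance question (and any custom joining of data) ends up living.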
Digital transformation starts with software development
Software development is another key requirement for businesses that are
pursuing digital transformation quests. Leveraging technology and ensuring it
is able to offer reliable and high quality results is a key focus for the
majority of companies. At this stage, it is important for businesses to acknowledge what their strategic goals are and implement software that is going to help them reach those ambitions and achieve tangible results. Businesses should also ensure the technology they select is equipped with sustainable software that will withstand time and inevitable digital advances, and deliver on the requirements of the new normal. In addition, today’s climate has emphasised the importance of providing teams with reliable
software that enables them to work remotely and complete projects without any
constraints. In the midst of the pandemic, 60% of the UK’s adult population
were working remotely. Unfortunately, many businesses did not have the
technology in place to cope with this immediate change. Therefore, IT decision
makers and leaders had to undergo a rapid shift to remain agile and maintain
continuity during this unprecedented time. By keeping software up to date and
regularly enhancing tools, employees can remain productive and maintain a high
level of communication with colleagues.
We need to be more imaginative about cybersecurity than we are right now
“Trying to achieve security is something of a design attitude—where at every
level in your system design, you are thinking about the possible things that
can go wrong, the ways the system can be influenced, and what circuit-breakers
you might have in place in case something unforeseen happens,” said Mickens.
“That seems like a vague answer because it is: There isn’t a magic way to do
it.” Designers, Mickens continued, might even need to consider the political
or ethical mindset of the people using their system. “There’s no simple way to
figure out if our system is going to be used ethically or not, because ethics
itself is very poorly defined. And when we think about security, we need to
have a similarly broad attitude, saying that there are fundamental questions
which are ambiguous, and which have no clean answer—‘What is security and how
do I make my product secure?’ As a result, we need to be more imaginative than
we are right now.” Thus, suggested Zittrain, the question has moved to the
supply side: Consumers want safe products, and the onus is on designers to
provide them. This, he said, opens an even thornier question: Does there need
to be a regulatory board for people producing code, and if not, “What would
incent the suppliers to worry about systematic risks that might not even be
traced back to them?”
How to Make DevOps Work with SAFe and On-Premise Software
The main issues we dealt with in speeding up our delivery from a DevOps
perspective were: testing (unit and integration), pipeline security checks, licensing (open source and other), builds, static code analysis, and deployment of the current release version. For some of these problems we had the tools; for others, we didn’t and had to integrate new tools. Another issue was the lack of general visibility into the pipeline. We were unable to get a glimpse of our DevOps status at any given moment. This was
because we were using many tools for different purposes and there was no
consolidated place where someone could take a look and see the complete status
for a particular component or the broader project. With distributed teams, it is always a challenge to bring everyone to the same understanding of, and visibility into, the development status. We implemented a tool to provide standard visibility into how each team was doing and how the SAFe train(s)
were doing in general. This tool provided us with a good overview of the
pipeline health. The QA department has been working as the key-holder of the releases. Its responsibility is to check each release against all known bugs and to block the version from being released if there are critical bugs.
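As an illustration of that kind of QA release gate, here is a minimal sketch of a pipeline step that queries an issue tracker for open critical bugs against a release and fails the build if any are found. The BUG_TRACKER_URL endpoint, its query parameters, and the RELEASE_VERSION variable are hypothetical placeholders (not the team's actual tooling), and Node 18+ is assumed for the built-in fetch.

```typescript
// Hypothetical release gate: block the release if critical bugs are still open.
interface Bug {
  id: string;
  title: string;
  severity: string;
}

async function criticalBugsOpen(releaseVersion: string): Promise<Bug[]> {
  // Placeholder tracker endpoint; swap in whatever issue tracker the team uses.
  const base = process.env.BUG_TRACKER_URL ?? "http://bugtracker.internal.example";
  const url = `${base}/bugs?release=${encodeURIComponent(releaseVersion)}&severity=critical&status=open`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Bug tracker query failed: ${response.status}`);
  }
  return (await response.json()) as Bug[];
}

async function releaseGate(releaseVersion: string): Promise<void> {
  const blockers = await criticalBugsOpen(releaseVersion);
  if (blockers.length > 0) {
    console.error(`Release ${releaseVersion} blocked by ${blockers.length} critical bug(s):`);
    blockers.forEach((b) => console.error(`  ${b.id}: ${b.title}`));
    process.exit(1); // non-zero exit fails this pipeline stage
  }
  console.log(`Release ${releaseVersion} cleared: no open critical bugs.`);
}

releaseGate(process.env.RELEASE_VERSION ?? "unversioned").catch((err) => {
  console.error(err.message);
  process.exit(1);
});
```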
The Two Sides of AI in the Modern Digital Age
We will now discuss some of its more sinister aspects. As we’ve already
mentioned, as the digital landscape welcomes an increasing number of
technological advancements, so does the threat landscape. With rapid
progress in the cybersecurity arena, cybercriminals have turned to AI to amp up their sophistication. One way hackers leverage the potential of artificial intelligence is by using AI to hide malicious code in otherwise trustworthy applications. The hackers program the code in such a way that it executes only after a certain period has elapsed, which makes detection even more difficult. In some cases, cybercriminals have programmed the code to activate after a particular number of individuals have downloaded the application, which maximizes the attack's impact. Furthermore, hackers can exploit the power offered by artificial intelligence, using the AI's ability to adapt to changes in the environment for their own gain. Typically, hackers employ the adaptability of AI-powered systems to execute stealth attacks and build intelligent malware programs. These malware programs can collect information during an attack on why previous attacks weren't successful, and act accordingly.
A Pause to Address 'Ethical Debt' of Facial Recognition
This pause is needed. All too often, ethics lags technology. With all apologies to Jeff Goldblum, there's no need to be hunted by intelligent dinosaurs to realize that we often do things "because we can, rather than because we should." The ACM's call for restraint is appropriate, although a few issues remain. What about the facial data that already exists from currently deployed systems? This question is not unique to facial recognition, but rather one that is well known from GDPR compliance and other use cases. The stoppage is intended for private and public entities, but personal cameras — and an opening for facial recognition — are rapidly becoming ubiquitous. Log in to your neighborhood watch program for a close-to-home example. (What street doesn't have a doorbell camera?) Public life is being monitored and passive data on our habits and lives is continually collected; anywhere there is a camera, facial recognition technology is in play. The call by the ACM could be stronger. They urge the immediate suspension of the use of facial recognition technology anywhere it is "known or reasonably foreseeable to be prejudicial to established human and legal rights." What is considered reasonable here? Is good intent enough to absolve misuse of these systems from blame, for instance?
DevOps best practices Q&A: Automated deployments at GitHub
Ultimately, we push code to production on our own GitHub cloud platform, in our data centers, utilizing features provided by the GitHub UI and API along
the way. The deployment process can be initiated with ChatOps, a series of
Hubot commands. They enable us to automate all sorts of workflows and have a
pretty simple interface for people to engage with in order to roll out their
changes. When folks have a change that they’d like to ship or deploy to
github.com, they just need to run .deploy with a link to their pull request
and the system will automatically deconstruct what’s within that link, using
GitHub’s API for understanding important details such as the required CI
checks, authorization, and authentication. Once the deployment has
progressed through a series of stages—which we will talk about in more
detail later—you’re able to merge your pull request in GitHub, and from
there you can continue on with your day, continue making improvements, and
shipping features. The system will know exactly how to deploy it, which servers are involved, and what systems to run. The person running the command doesn't need to know any of that is happening. Before any changes are
made, we run a series of authentication processes to ensure a user even has
the right access to run these commands.
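GitHub's internal tooling behind .deploy isn't shown in the interview, but to make the flow concrete, here is a minimal sketch of the same idea using the public Octokit REST client: parse the pull request link, confirm the CI checks on its head commit, and record a deployment. The repository URL, token variable, and "production" environment name are hypothetical, and the real system obviously does far more (authorization, staged rollout, and so on).

```typescript
import { Octokit } from "@octokit/rest";

// Illustrative only: shows the kind of API calls a ".deploy <pull-request URL>"
// ChatOps command could translate into; not GitHub's actual implementation.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function deployFromPullRequestUrl(prUrl: string): Promise<void> {
  // e.g. https://github.com/acme/widgets/pull/123 (hypothetical repository)
  const match = prUrl.match(/github\.com\/([^/]+)\/([^/]+)\/pull\/(\d+)/);
  if (!match) throw new Error(`Not a pull request URL: ${prUrl}`);
  const [, owner, repo, num] = match;
  const pull_number = Number(num);

  // Look up the pull request to find the commit we would deploy.
  const { data: pr } = await octokit.rest.pulls.get({ owner, repo, pull_number });

  // Verify the CI checks on that commit have passed.
  const { data: checks } = await octokit.rest.checks.listForRef({
    owner,
    repo,
    ref: pr.head.sha,
  });
  const failing = checks.check_runs.filter((c) => c.conclusion !== "success");
  if (failing.length > 0) {
    throw new Error(`CI not green: ${failing.map((c) => c.name).join(", ")}`);
  }

  // Record a deployment for the branch; downstream systems would pick this up
  // and roll the change out to the relevant servers.
  await octokit.rest.repos.createDeployment({
    owner,
    repo,
    ref: pr.head.ref,
    environment: "production",
    auto_merge: false,
  });
  console.log(`Deployment created for ${owner}/${repo}#${pull_number}`);
}

deployFromPullRequestUrl(process.argv[2] ?? "").catch((err) => {
  console.error(err.message);
  process.exit(1);
});
```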
Exploring the prolific threats influencing the cyber landscape
Ransomware has quickly become a more lucrative business model in the past
year, with cybercriminals taking online extortion to a new level by
threatening to publicly release stolen data or sell it and name and shame
victims on dedicated websites. The criminals behind the Maze, Sodinokibi
(also known as REvil) and DoppelPaymer ransomware strains are the pioneers
of this growing tactic, which is delivering bigger profits and resulting in
a wave of copycat actors and new ransomware peddlers. Additionally, the
infamous LockBit ransomware emerged earlier this year, which — in addition
to copying the extortion tactic — has gained attention due to its
self-spreading feature that quickly infects other computers on a corporate
network. The motivations behind LockBit appear to be financial, too. CTI
analysts have tracked the cybercriminals behind it on Dark Web forums, where they have been found advertising regular updates and improvements to the ransomware and actively recruiting new members by promising a portion of the ransom money. The success of these hack-and-leak extortion methods,
especially against larger organizations, means they will likely proliferate
for the remainder of 2020 and could foreshadow future hacking trends in
2021.
Unsecured Voice Transcripts Expose Health Data - Again
In a report issued Tuesday, security researchers at vpnMentor write that
they discovered the exposed voice transcript records in early July and
contacted Pfizer about the problem three times before the pharmaceutical
company finally responded on Sept. 22 and fixed the issue on Sept. 23.
Contained in the exposed records were personally identifiable information,
including customers' full names, home addresses, email addresses, phone
numbers and partial details of health and medical status, the report says.
... "However, upon further investigation, we found files and entries connected to various brands owned by Pfizer," including Lyrica,
Chantix, Viagra and cancer treatments Ibrance and Aromasin, the report says.
Eventually, the vpnMentor team concluded the exposed bucket most likely
belonged to the company's U.S. Drug Safety Unit. "Once we had concluded our
investigation, we reached out to Pfizer to present our findings. It took two
months, but eventually, we received a reply from the company." In a
statement provided to Information Security Media Group, the pharmaceutical
company says: "Pfizer is aware that a small number of non-HIPAA data records
on a vendor-operated system used for feedback on existing medicines were
inadvertently publicly available. ..."
Quote for the day:
"A leader or a man of action in a crisis almost always acts subconsciously and then thinks of the reasons for his action." -- Jawaharlal Nehru