
Despite assurances from stalwarts that AGI will benefit all of humanity, there are already real problems with today’s single-purpose narrow AI algorithms that call this assumption into question. According to a Harvard Business Review story, when AI applications ranging from predictive policing to automated credit scoring go unchecked, they represent a serious threat to our society. A recently published Pew Research survey of technology innovators, developers, business and policy leaders, researchers, and activists reveals skepticism that ethical AI principles will be widely implemented by 2030, owing to a widespread belief that businesses will prioritize profits and that governments will continue to surveil and control their populations. If it is so difficult to enable transparency, eliminate bias, and ensure the ethical use of today’s narrow AI, then the potential for unintended consequences from AGI appears astronomical. And that concern covers only the actual functioning of the AI. The political and economic impacts of AI could produce a range of possible outcomes, from a post-scarcity utopia to a feudal dystopia. It is possible, too, that both extremes could coexist.

The teams using the visualization board were in different countries, so they needed to address digital connection across time zones. This meant a more robust process for things like retrospectives, a more thorough breakdown of stories into tasks, more "scheduled" time for showcases and issue resolution, and so on. The team worried that a more defined process would stymie their agility, but found that it worked well in focusing their activities productively, in line with the broader objectives, without requiring constant communication. They found they needed more overlapping work time, particularly during release planning and deployment. And they had to think about and plan task/work turnover to the other team at the end of each day, something they never had to do when in physical proximity. They’ve seen some team members fall back into role-based activities more often. There simply isn’t the natural communication, and the subsequent spark of curiosity, that is truly the hallmark of team collaboration.

Running a Kubernetes cluster in EKS, you can use either a standard Ubuntu image as the OS for your nodes or Amazon’s optimized EKS AMIs. The optimized AMIs can give you better speed and performance than a generic OS. Once the cluster is running, there’s no way to enable automatic upgrades of the Kubernetes version. While EKS does have excellent documentation on how to upgrade your cluster, it is a manual process. Likewise, if your nodes start reporting failures, EKS doesn’t offer node auto-repair like GKE does, so you’ll have to either monitor for that yourself and manually fix nodes or set up your own system to repair broken nodes. As with GKE, you pay an administration fee of $0.10 per hour per cluster when running EKS, beyond which you only pay for the underlying resources. If you want to run your cluster on-prem, it’s possible to do so using either AWS Outposts or EKS Anywhere, which launches sometime in 2021.
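
Because control-plane upgrades are manual, one option is to script the check yourself. Below is a minimal sketch using boto3; the cluster name my-cluster and target version 1.21 are hypothetical placeholders, and worker nodes or managed node groups would still need to be upgraded separately.

    import boto3

    # Hypothetical cluster name and target version; adjust for your environment.
    CLUSTER_NAME = "my-cluster"
    TARGET_VERSION = "1.21"

    eks = boto3.client("eks")

    # Read the current Kubernetes version of the control plane.
    cluster = eks.describe_cluster(name=CLUSTER_NAME)["cluster"]
    current_version = cluster["version"]
    print(f"{CLUSTER_NAME} is running Kubernetes {current_version}")

    # If the control plane is behind, request an upgrade.
    # EKS only moves one minor version at a time.
    if current_version != TARGET_VERSION:
        update = eks.update_cluster_version(name=CLUSTER_NAME, version=TARGET_VERSION)
        print("Update requested:", update["update"]["id"])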

Those who had reset their devices, however, hadn’t quite wiped the slate clean in the way they thought they had. Researchers found that, contrary to what Amazon says, you can actually recover a lot of sensitive personal data stored on factory-reset devices. The reason is how these devices store your information on NAND flash memory, a storage medium that, because of the way it manages writes, doesn’t actually delete the data when the device is reset. “We show
that private information, including all previous passwords and tokens, remains
on the flash memory, even after a factory reset. This is due to wear-leveling
algorithms of the flash memory and lack of encryption,” researchers write. “An
adversary with physical access to such devices (e.g., purchasing a used one) can
retrieve sensitive information such as Wi-Fi credentials, the physical location
of (previous) owners, and cyber-physical devices (e.g., cameras, door locks).”
Granted, said hypothetical snoopers would really have to know what they were doing; retrieving the data would take a certain amount of expertise.
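
As a rough illustration of why a reset can leave data behind, here is a toy model of a wear-leveled store; the class, names, and behavior are purely hypothetical and greatly simplified, not how any real device firmware works. Writes land in fresh physical blocks, and a "reset" only clears the logical mapping, so old copies linger until the blocks are physically reused.

    # Toy model of wear-leveled flash: a "reset" clears the logical map,
    # but stale copies stay in physical blocks until they are overwritten.
    class ToyFlash:
        def __init__(self, num_blocks=8):
            self.blocks = [None] * num_blocks   # physical blocks
            self.mapping = {}                   # logical key -> physical block index
            self.next_free = 0                  # simplistic wear leveling: always use a fresh block

        def write(self, key, value):
            self.blocks[self.next_free] = (key, value)
            self.mapping[key] = self.next_free
            self.next_free = (self.next_free + 1) % len(self.blocks)

        def factory_reset(self):
            self.mapping.clear()                # the logical view looks empty...

        def dump_raw(self):
            return [b for b in self.blocks if b is not None]  # ...but raw blocks still hold data

    flash = ToyFlash()
    flash.write("wifi_password", "hunter2")
    flash.factory_reset()
    print(flash.mapping)     # {} -- the device looks wiped
    print(flash.dump_raw())  # [('wifi_password', 'hunter2')] -- still recoverable from raw flash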

In addition to technological solutions, a necessary element in building a strong
cybersecurity foundation is working with all internal and external stakeholders,
including law enforcement. More data enables more effective responses. Because of this, cybersecurity professionals must openly partner with global or regional law enforcement and with coordination bodies such as US-CERT. Sharing intelligence with law
enforcement and other global security organizations is the only way to
effectively take down cybercrime groups. Defeating a single ransomware incident
at one organization does not reduce the overall impact within an industry or
peer group. It’s a common practice for attackers to target multiple verticals,
systems, companies, networks and software. To make it more difficult and
resource-intensive for cybercriminals to attack, public and private entities
must collaborate by sharing threat information and attack data. Private-public
partnerships also help victims recover their encrypted data, ultimately reducing
the risks and costs associated with the attack. Visibility increases as public
and private entities band together.

A lot of organizations are moving from traditional on-premises application deployments into one or multiple clouds. Now, those transitions carry with them the architectural baggage of how to design networking and security for this new cloud era, where applications are distributed across multi-cloud, software-as-a-service, and even edge computing environments. And so security is paramount to the success of that motion. We also know that security attacks are becoming increasingly sophisticated, and that’s especially true when applications are moving to the cloud. And cloud infrastructure does not always offer the same level of capabilities and features that enterprises have been used to in their on-premises environments. So this security-oriented mindset is extremely important for building these networks that now span not only the on-premises environment but also cloud environments.

We can see automation being carried out at every phase of development: triggering the build, running unit tests, packaging, deploying to the specified environments, running build verification tests, smoke tests, and acceptance tests, and finally deploying to the production environment. And automating test cases means not just unit tests but also installation tests, integration tests, user experience tests, UI tests, and so on. In addition to development activities, DevOps requires the operations team to automate all of its activities, such as provisioning servers, configuring servers, networks, and firewalls, and monitoring the application in the production system. Hence, to answer what to automate: build triggering, compiling and building, deploying or installing, infrastructure setup as a coded script, environment configuration as a coded script, testing, post-deployment performance monitoring in production, log monitoring, alerting, and pushing notifications and alerts from production whenever errors or warnings occur. A minimal sketch of such a pipeline driver follows.
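
To make the stages concrete, here is a minimal, illustrative pipeline driver in Python. The commands it shells out to (make build, pytest, a deploy.sh script) are placeholders for whatever build, test, and deployment tooling a team actually uses, not a prescription.

    import subprocess
    import sys

    # Ordered pipeline stages: (stage name, shell command).
    # The commands are placeholders; substitute your real build/test/deploy tooling.
    STAGES = [
        ("build",             "make build"),
        ("unit tests",        "pytest tests/unit"),
        ("package",           "make package"),
        ("deploy to staging", "./deploy.sh staging"),
        ("smoke tests",       "pytest tests/smoke"),
        ("deploy to prod",    "./deploy.sh production"),
    ]

    def run_pipeline():
        for name, command in STAGES:
            print(f"--- {name}: {command}")
            result = subprocess.run(command, shell=True)
            if result.returncode != 0:
                # Fail fast: a broken stage stops the pipeline before it reaches production.
                print(f"Stage '{name}' failed with exit code {result.returncode}")
                sys.exit(result.returncode)
        print("Pipeline completed successfully")

    if __name__ == "__main__":
        run_pipeline()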

Implementing databases and data analytics within cloud native applications involves several steps and tools, from data ingestion and preliminary storage to data preparation and storage for analytics and analysis. An open, adaptable architecture will help you execute this process more effectively. This architecture requires several key technologies. Container and Kubernetes platforms provide a consistent foundation for deploying databases, data analytics tools, and cloud native applications across infrastructure, as well as self-service capabilities for developers and integrated compute acceleration. PostgreSQL, Apache Kafka, and Debezium can be deployed using Kubernetes Operators to provide a cloud native data analytics solution that can be used for a variety of use cases and across hybrid cloud environments, including the datacenter, public cloud infrastructure, and the edge, for all stages of cloud native application development and deployment.
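
As one small illustration of the Kafka-plus-Debezium piece, the sketch below uses the kafka-python client to read the change events a Debezium PostgreSQL connector publishes. The topic name dbserver1.public.orders and the bootstrap address are assumptions about a hypothetical setup, not part of the architecture described above.

    import json
    from kafka import KafkaConsumer

    # Assumed topic for a Debezium PostgreSQL connector capturing the public.orders table;
    # adjust the topic and bootstrap server for your own deployment.
    consumer = KafkaConsumer(
        "dbserver1.public.orders",
        bootstrap_servers="my-kafka-bootstrap:9092",
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")) if raw else None,
    )

    for message in consumer:
        event = message.value
        if event is None:
            continue  # tombstone record emitted after a delete
        payload = event.get("payload", event)
        # Debezium change events carry the operation ('c' create, 'u' update, 'd' delete)
        # plus the row state before and after the change.
        print(payload.get("op"), payload.get("after") or payload.get("before"))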

Although there are subtle differences between Agile and DevOps Testing, those
working with Agile will find DevOps a little more familiar to work with (and
eventually adopt). While Agile principles are applied successfully in the
development & QA iterations, it is a different story altogether (and often
a bone of contention) on the operations side. DevOps proposes to rectify this
gap. Now, instead of just Continuous Integration, DevOps involves “Continuous Development”, where code that is written and committed to version control will be built, deployed, tested, and installed on the production environment, ready to be consumed by the end user. This process helps everyone in
the entire chain since environments and processes are standardized. Every
action in the chain is automated. It also gives freedom to all the
stakeholders to concentrate their efforts on designing and coding a
high-quality deliverable rather than worrying about the various building,
operations, and QA processes. It drastically brings down the time to live, to about 3-4 hours from the moment code is written and committed to its deployment in production for end-user consumption.

The rituals of Agile development are largely procedural and tactical. In
contrast, organizational agile transformation is driven by and reinforces
cultural norms that make staying agile possible. A development lead can compel team members to participate in daily scrums and weekly sprints, but Agile development doesn’t address the task of building genuine collaboration or a culture of accountability. In contrast, an agile transformation requires cultural support to move the organization into a state of resonant agility. That state, in turn, reinforces and strengthens norms of collaboration and
accountability that an agile culture encourages. An agile culture takes a
broader view, beyond providing a prescriptive process for building something
specific. It pulls together stakeholders from multiple functional areas to
tackle an issue through organic, collaborative analysis. ... Next-generation
technologies are purpose-built, not broad platforms that force conformity
instead of innovation. There’s no one platform or suite of tools for an agile
organization. Teams work with an organic tech stack that gives them the
flexibility to use the best tool for the job, and everyone’s job is
different.

Quote for the day:
"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard