How realistic is the promise of low-code?
“Grady Booch, one of the fathers of modern computer science, said the whole history of computer science is one of layering: adding new layers of abstraction on top of existing technology. Low-code is simply a layer of abstraction that makes the process of defining logic far more accessible to more people. “Even children are being taught to code through languages such as MIT’s Scratch, a visual programming language. Just as humans communicate through both words and pictures, with a picture being worth roughly 1,000 words, developers can develop using both code and low-code or visual programming languages. “A visual language is much more accessible for many people, as well as much safer. Many business users who are great subject matter experts can make small dips into defining logic or user interfaces through low-code systems, without necessarily having to commit hours and days to developing a feature through more sophisticated methods.” ... Tools that use a visual node editor to create code paths are impressive, but the code still exists as a base layer for advanced control. I once built a complete mobile video game using these visual editors. Once workflows get slightly more complex, it’s helpful to be able to edit the code these tools generate.
“The Surgical Team” in XXI Century
In the surgical team of the XXI century, every artifact shall have a designated owner. With ownership comes responsibility for the quality of the artifact, which is assessed by the people who consume it (for example, consumers of designs are developers, and consumers of code are other developers who need to review it or interface with it). Common ownership as advocated by Extreme Programming can only emerge as the highest form of individual ownership, in highly stable teams of competent people who have additionally developed interpersonal relationships (a.k.a. friendship) and feel obligated to support one another. In other situations, collective ownership will end in a tragedy of the commons caused by social loafing. Each team member will complete his assignments with the least possible effort, pushing the consequences of low quality onto others (the quality of product artifacts becomes "the commons"). This is also the reason why software development outsourcing is not capable of producing quality solutions. The last pillar is respect. It is important for the architect and administrator not to treat developers, testers, and automation engineers as replaceable grunts (a.k.a. resources). The architect, being the front man of the team, needs to be knowledgeable and experienced, but that doesn’t mean that developers or testers aren’t.
The great rebalancing: working from home fuels rise of the 'secondary city'
There are already signs of emerging disparity. Weekday footfall in big urban
centres, which plummeted during lockdown, has not bounced back – the latest
figures suggest less than one-fifth of UK workers have returned to their
physical workplaces – which has led to reductions in public transport. This
disadvantages low-income workers and people of colour, and has led to job
losses at global chains such as Pret a Manger and major coffee franchises.
Meanwhile, house prices in the Hamptons have reached record highs as wealthy
New Yorkers have opted to weather the pandemic at the beach. Companies have
also started capitalising on reduced occupancy costs – potentially passing
them on to workers. The US outdoors retailer REI plans to sell its brand-new
Seattle campus, two years in the making, in favour of smaller satellite
sites. In the UK, government contractor Capita is to close more than a third
of its 250 offices after concluding its 45,000 staff work just as
efficiently at home. Not every community will be able to take advantage of
the remote working boom, agrees Serafinelli. Those best placed to do so
already have – or are prepared to invest in – good-quality schools,
healthcare and transport links.
Deno Introduction with Practical Examples
Deno was originally announced in 2018 and reached 1.0 in 2020, created by Ryan Dahl, the original creator of Node.js, along with other contributors. The name DE-NO may seem odd until you realize that it is simply NO-DE with the syllables swapped. The Deno runtime:
- Adopts security by default: unless explicitly allowed, Deno disallows file, network, or environment access.
- Includes TypeScript support out of the box.
- Supports top-level await.
- Includes built-in unit testing and code formatting (deno fmt).
- Is compatible with browser JavaScript APIs: programs authored in JavaScript without the Deno namespace and its internal features should work in all modern browsers.
- Provides bundling into a single file through the deno bundle command, which lets you share your code for others to run without installing Deno.
... With simplicity and security in mind, Deno ships with browser-compatible APIs, including fetch(), Web Workers, and WebAssembly, which let you create a web server with little or no difference from a client-side JavaScript application. You can create a web server in Deno by importing the http module from the official repo. Although there are already many third-party libraries out there, the Deno standard library also provides a straightforward way to accomplish this.
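As a minimal sketch of that approach, the following program uses the standard library's http module (the std@0.79.0 version pin is an assumption from the Deno 1.x era; substitute the release matching your install):

```typescript
// server.ts: a minimal Deno web server built on the std http module.
// The std@0.79.0 version pin is an assumption; adjust to your Deno release.
import { serve } from "https://deno.land/std@0.79.0/http/server.ts";

// Bind to port 8000. Deno supports top-level await, so no wrapper is needed.
const server = serve({ port: 8000 });
console.log("Listening on http://localhost:8000/");

// Iterate over incoming requests and respond to each one.
for await (const req of server) {
  req.respond({ body: "Hello from Deno!\n" });
}
```

Security by default is visible the moment you run it: deno run server.ts is refused with a permission error, while deno run --allow-net server.ts explicitly grants the network access the server needs.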
How to Successfully Integrate Security and DevOps
As digitalization transforms industries and business models, organizations
increasingly are adopting modern software engineering practices such as
DevOps and agile to become competitive in the modern marketplace. DevOps
enables organizations to release new products and features faster, but this
pace and frequency of application releases can conflict with established
practices of handling security and compliance. This creates an enterprise paradox: the need to go faster and innovate while staying secure and avoiding compromises on controls. However, integrating security into DevOps efforts (DevSecOps) across the whole product life cycle, rather than handling it independently or leaving it until the end of the development process after a product is released, can help organizations significantly reduce risk, making them more agile and their products more secure and reliable. When properly implemented, DevSecOps offers immense benefits, such as easier remediation of vulnerabilities and mitigation of cost overruns caused by delays.
It also enables developers to tackle security issues more quickly and
effectively.
Forrester: CIOs must prepare for Brexit data transfer
According to the Information Commissioner’s Office (ICO), the government has said that transfers of data from the UK to the European Economic Area (EEA) will not be restricted; however, from the end of the transition period, unless the European Commission makes an adequacy decision, GDPR transfer rules will apply to any data coming from the EEA into the UK. The ICO website
recommended that businesses consider what GDPR safeguards they can put in
place to ensure that data can continue to flow into the UK. Forrester also
highlighted the lack of an adequacy decision, which it said would impact the
supply chain of all businesses that rely on technology infrastructure in the
UK when dealing with European citizens’ personal data. The analyst firm
predicted that cloud providers will start to provide a way for their
customers to make this transition. The authors of the report recommended
that companies should focus on assessing compliance with UK data protection
requirements, including the UK’s GDPR, and determine how lack of an adequacy
decision will impact data transfers, and work on a transition strategy. While the ICO is the UK’s supervisory authority (SA) for the GDPR, the European Data Protection Board (EDPB) stated in July that the ICO will no longer qualify as a competent SA under the GDPR at the end of the transition period.
Ransomware vs WFH: How remote working is making cyberattacks easier to pull off
"You have a much bigger attack surface; not necessarily because you have
more employees, but because they're all in different locations, operating
from different networks, not working within the organisation's perimeter network, on multiple types of devices. The complexity of the attack surface
grows dramatically," says Shimon Oren, VP of research and deep learning at
security company Deep Instinct. For many employees, the pandemic could have
been the first time they have ever worked remotely. And being isolated from the corporate environment – a place where they might see or hear daily warnings about cybersecurity and staying safe online, and where they can ask for advice directly in person – makes it harder to make good decisions about security. "That background noise of security is kind of gone, and that makes it a lot harder, and security teams have to do a lot more on messaging now. People working at home are more insular; they can't lean over and ask 'did you get a weird link?' – you don't have anyone to do that with, and you're making choices yourself," says Sherrod DeGrippo,
senior director of threat research at Proofpoint. "And the threat actors
know it and love it. We've created a better environment for them," she adds.
Machine learning in network management has promise, challenges
It’s difficult to say how rapidly enterprises are buying AI and ML systems,
but analysts say adoption is in the early stages. One sticking point is
confusion about what, exactly, AI and ML mean. Those imagining AI as being able to effortlessly identify attempted intruders and to analyze and optimize traffic flows will be disappointed. The use of the term AI to
describe what’s really happening with new network management tools is
something of an overstatement, according to Mark Leary, research director at
IDC. “Vendors, when they talk about their AI/ML capabilities, if you get an
honest read from them, they’re talking about machine learning, not AI,” he
said. There isn’t a hard-and-fast definitional split between the two terms.
Broadly, they both describe the same concept—algorithms that can read data
from multiple sources and adjust their outputs accordingly. AI is most
accurately applied to more robust expressions of that idea than to a system
that can identify the source of a specific problem in an enterprise
computing network, according to experts. “We’re probably overusing the term
AI, because some of these things, like predictive maintenance, have been in
the field for a while now,” said Jagjeet Gill, a principal in Deloitte’s
strategy practice.
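As a toy illustration of that distinction (invented here, not drawn from any vendor named above), the kind of capability that "machine learning, not AI" often means in practice is a statistical detector that reads metric data and adjusts its output accordingly:

```typescript
// Illustrative sketch only: a z-score anomaly detector over network latency.
// Flags a sample as anomalous when it deviates from the rolling mean of the
// preceding window by more than three standard deviations.
function detectAnomalies(latenciesMs: number[], window = 20): boolean[] {
  return latenciesMs.map((value, i) => {
    const history = latenciesMs.slice(Math.max(0, i - window), i);
    if (history.length < 5) return false; // not enough data yet
    const mean = history.reduce((a, b) => a + b, 0) / history.length;
    const variance =
      history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
    const stdDev = Math.sqrt(variance);
    return stdDev > 0 && Math.abs(value - mean) > 3 * stdDev;
  });
}

// Steady traffic around 20 ms, then a spike a human would also call anomalous.
const samples = [19, 21, 20, 22, 18, 20, 21, 19, 20, 95];
console.log(detectAnomalies(samples));
// -> last entry is true; the rest are false
```

Useful, but a long way from the effortless intruder-spotting that the term "AI" conjures up, which is exactly the gap the analysts describe.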
The Past and Future of In-Memory Computing
“With the explosion in the adoption of IoT (which is soon to be catalyzed by
5G wireless networking), countless data sources in our daily life now
generate continuous streams of data that need to be mined to save lives,
improve efficiency, avoid problems and enhance experiences,” Bain says in an
email to Datanami. “Now we can track vehicles in real-time to keep drivers
safe, ensure the safe and rapid delivery of needed goods, and avoid
unexpected mechanical failures. Health-tracking devices can generate telemetry that enables diagnostic algorithms to spot emerging issues, such as heart irregularities, before they become urgent. Websites can track e-commerce shoppers to assist them in finding the best products that meet their needs.” IMDGs aren’t ideal for all streaming or IoT use cases. But when the use case is critical and time is of the essence, IMDGs will have a role in orchestrating the data and providing fast response times. “The
combination of memory-based storage, transparent scalability, high
availability, and integrated computing offered by IMDGs ensures the most
effective use of computing resources and leads to the fastest possible
responses,” Bain writes. “Powerful but simple APIs enable application
developers to maintain a simplified view of their data and quickly analyze
it without bottlenecks. IMDGs offer the combination of power and ease of use
that applications managing live data need more than ever before.”
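To make "powerful but simple APIs" concrete, here is a hypothetical sketch of the kind of key-value interface an IMDG client typically exposes. GridClient and its methods are invented for illustration (real products such as Hazelcast or Apache Ignite have their own client libraries), and a single in-memory Map stands in for a distributed, replicated store:

```typescript
// Hypothetical sketch of a simple IMDG-style client API; not a real product.
interface DeviceTelemetry {
  deviceId: string;
  heartRateBpm: number;
  recordedAt: Date;
}

class GridClient<V> {
  // Single-node stand-in for what would be a partitioned, replicated store.
  private store = new Map<string, V>();

  put(key: string, value: V): void {
    this.store.set(key, value);
  }

  get(key: string): V | undefined {
    return this.store.get(key);
  }

  // "Integrated computing": run a function next to the data instead of
  // shipping the data to the caller.
  compute<R>(key: string, fn: (value: V | undefined) => R): R {
    return fn(this.store.get(key));
  }
}

const grid = new GridClient<DeviceTelemetry>();
grid.put("device-42", {
  deviceId: "device-42",
  heartRateBpm: 128,
  recordedAt: new Date(),
});

// Flag an emerging issue with a computation colocated with the data.
const assessment = grid.compute("device-42", (t) =>
  t !== undefined && t.heartRateBpm > 120 ? "irregularity detected" : "ok",
);
console.log(assessment); // "irregularity detected"
```

The compute method hints at the "integrated computing" Bain describes: the function runs next to the data, so only the small result crosses the wire rather than the data itself.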
Work from home strategies leave many companies in regulatory limbo
One potential solution to this predicament is a temporary regulatory grace period. Regulatory bodies or lawmakers could establish a window of
opportunity for organizations to self-identify the type and duration of
their non-compliance, what investigations were done to determine that no
harm came to pass, and what steps were, or will be, taken to address the
issue. Currently, the concept of a regulatory grace period is slowly gaining
traction in Washington, but time is of the essence. Middle market companies
are quickly approaching the time when they will have to determine just what
to disclose during these upcoming attestation periods. Companies understand
that mistakes were made, but those issues would not have arisen under normal
circumstances. The COVID-19 pandemic is an unprecedented event that companies could never have planned for. Business operations and personal
safety initially consumed management’s thought processes as companies
scrambled to keep the lights on. Ultimately, many companies made the right
decisions from a business perspective to keep people working and avoid
suffering a data breach, even in a heightened environment of data security
risks. Any grace period would not absolve the organization of responsibility
for any regulatory exposures.
Quote for the day:
"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera