The Time Travel Method of Debugging Software
By removing preconceived notions about how challenging programming is, Jason
Laster became more confident in building a developer-friendly debugging tool. “We
want to make software more approachable,” he said. “We want more people to feel
like they can program and do things that don’t require a math degree.” He went
on to say, “Imagine being a Project Manager and asking your engineer why
something broke and receiving a long explanation that still leaves your question
unanswered. Using Replay, they can share the URL with the engineers who can just
go in and leave a comment. Now, the PM can recognize the function and identify
what went wrong on their own. If anybody along the way can record the issue with
Replay, then everyone downstream can look at the replay, debug it and see
exactly what went wrong.” Acknowledging that it’s easy to mistake Replay for
another browser-recording tool, Laster explained how Replay differs. “On one end
of the spectrum, you have something like a video recorder; go along that
spectrum a little bit further and you have something like a session replay tool
or an observability tool.”
Software AI Accelerators: AI Performance Boost for Free
The increasing diversity of AI workloads has created business demand for a
variety of AI-optimized hardware architectures. These can be classified into
three main categories: AI-accelerated CPUs, AI-accelerated GPUs, and dedicated
hardware AI accelerators. We see multiple examples of all three of these
hardware categories in the market today, for example Intel Xeon CPUs with DL
Boost, Apple CPUs with Neural Engine, Nvidia GPUs with tensor cores, Google
TPUs, AWS Inferentia, Habana Gaudi and many others that are under development by
a combination of traditional hardware companies, cloud service providers, and AI
startups. While AI hardware has continued to take tremendous strides, the growth
rate of AI model complexity far outstrips hardware advancements. About three
years ago, a natural language AI model like ELMo had ‘just’ 94 million
parameters, whereas this year the largest models reached over 1 trillion
parameters.
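That widening gap is exactly where software AI accelerators earn their name: the same model, routed through an optimized numeric path, runs faster on hardware that is already deployed. As an illustration only (the article names no framework, so PyTorch and an Nvidia GPU are assumptions here), this minimal sketch uses automatic mixed precision to steer matrix math onto a GPU's low-precision units such as tensor cores:

```python
# Hedged sketch: PyTorch automatic mixed precision as one example of a
# "software AI accelerator". Assumes an Nvidia GPU; not from the article.
import torch

model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()  # rescales loss so fp16 gradients don't underflow

x = torch.randn(64, 512, device="cuda")
target = torch.randn(64, 512, device="cuda")

optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    # Inside this context, matmuls run in float16 and can use tensor cores.
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()  # backward pass on the scaled loss
scaler.step(optimizer)         # unscales gradients, then steps
scaler.update()
```

The model definition is untouched; the speedup comes entirely from the software layer choosing a faster numeric path, which is the sense in which the boost is “for free.”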
Cybersecurity in the digital factory for manufacturers
Many companies are extremely hesitant about introducing the Industrial
Internet of Things (IIoT) or cloud systems because they believe it will open
the door to cybercriminals. What they fail to realize is they’re already
facing this danger every day. A simple email with an attachment or a link can
result in the encryption of all the information on a server. You’re at risk
even if you haven’t implemented an entire ecosystem connecting customers and
suppliers. That’s why it’s essential to be aware of the threats and to be
ready to respond quickly in the event of a cyberattack. Cybersecurity is
currently on everyone’s lips. In many widely publicized cases, large companies
have fallen victim to cyberattacks that compromised their operations in one
way or another. In some of these cases, the companies’ security policies had
not kept up with the past decade’s rapid changes in the use of digital
technologies and tools. They mistakenly thought a cyberattack could only
affect others. The sheet metal processing sector is no exception to this
reality.
Chaos Engineering and Observability with Visual Metaphors
Monitoring and observability have become essential capabilities for
engineering teams and, more broadly, for modern digital enterprises that want
to deliver excellent solutions. Because there are many reasons to monitor and
observe systems, Google has documented four Golden Signals: metrics that
define what it means for a system to be healthy and that form the foundation
of today's observability and monitoring platforms. The four metrics are
described below: Latency is the time that a service takes to serve a request.
It is important to track the latency of failed requests separately: an HTTP
500 triggered by a lost connection to a database or other critical backend
might be served very quickly, and folding such fast failures into overall
latency is misleading. Latency matters even for errors, since a slow error is
even worse than a fast error. Traffic is a measure of how much demand is
being placed on the system: how much stress the system is under at a given
time from users or transactions running through the service. For a web
service, for example, this measurement is usually HTTP requests per second.
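Both signals fall out of a simple request log. The sliding-window tracker below is an illustrative Python sketch (the class, window size, and percentile choice are assumptions, not from the article); note that it keeps error latency separate, since fast 500s would otherwise flatter the numbers:

```python
# Hedged sketch: tracking two of the four Golden Signals (latency, traffic)
# over a sliding window. Illustrative only; not tied to any monitoring tool.
import time
from collections import deque

class GoldenSignals:
    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, latency_seconds, is_error)

    def record(self, latency_s, is_error=False):
        now = time.monotonic()
        self.samples.append((now, latency_s, is_error))
        # Evict samples that have aged out of the window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def traffic_rps(self):
        # Traffic: demand on the system, here as requests per second.
        return len(self.samples) / self.window

    def latency_p99(self, successes_only=True):
        # Latency: track successes separately, because a fast HTTP 500
        # folded into the overall figure makes it misleading.
        lats = sorted(l for _, l, err in self.samples
                      if not (successes_only and err))
        return lats[int(0.99 * (len(lats) - 1))] if lats else None
```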
Reimagining the Post Pandemic Future: Leveraging the benefits of Hyperautomation
As the world emerges from the impact of the pandemic, hyperautomation
solutions will power digital self-service, which will take center stage in
connecting businesses with customers. With customers opening bank accounts
remotely, consulting doctors online, interacting with governments via citizen
self-serve, and so on, the scope of tech-enabled services keeps expanding
over time. All this implies a gradual shift away from the traditional back
office towards self-serve. From a hyperautomation standpoint, this shift will
get a considerable boost from low-code platforms that favor B2C-type
interactions. Rich and sophisticated user experiences
centered around simplicity and ease of use will be in demand. New user
experiences will break ground allowing more flexibility and improved
speed-to-solution. In addition to B2C type low-code portals, Artificial
Intelligence (AI) and analytics will be in demand. For example, organizations
will deploy AI technologies heavily to assist customer interactions.
UK regulators seek input on algorithmic processing and auditing
On the benefits and harms of algorithms, the DRCF identified “six
cross-cutting focus areas” for its work going forward: transparency of
processing; fairness for those affected; access to information products,
services and rights; resilience of infrastructure and systems; individual
autonomy for informed decision-making; and healthy competition to promote
innovation and better consumer outcomes. On algorithmic auditing, the DRCF
said the stakeholders pointed to a number of issues in the current landscape:
“First, they suggested that there is a lack of effective governance in the
auditing ecosystem, including a lack of clarity around the standards that
auditors should be auditing against and around what good auditing and outcomes
look like. Second, they told us that it was difficult for some auditors, such
as academics or civil society bodies, to access algorithmic systems to
scrutinise them effectively. Third, they highlighted that there were
insufficient avenues for those impacted by algorithmic processing to seek
redress, and that it was important for regulators to ensure action is taken to
remedy harms that have been surfaced by audits.”
Developer experience doesn’t have to stop at the front end
“It is natural to see providers making it easier for developers to do those
things and that is where we get into infrastructure meeting software
development,” RedMonk analyst James Governor told InfoWorld. “At the end of
the day, you need platforms to enable you to be more productive without
manually dealing with Helm charts, operators, or YAML.” Improving the back-end
developer experience can do more than improve the lives of back-end
developers. Providing better, more intuitive tools can enable back-end
developers to get more done, while also bringing down barriers to allow a
wider cohort of developers to manage their own infrastructure through
thoughtful abstractions. “Developer control over infrastructure isn’t an
all-or-nothing proposition,” Gartner analyst Lydia Leong wrote.
“Responsibility can be divided across the application lifecycle, so that you
can get benefits from ‘you build it, you run it’ without necessarily
parachuting your developers into an untamed and unknown wilderness and wishing
them luck in surviving because it’s not an ‘infrastructure and operations team
problem’ anymore.”
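One concrete shape such a thoughtful abstraction can take is a function that writes the YAML so developers never do. The helper below is hypothetical (the article names no specific platform or API); it shows a platform team exposing three parameters while owning every other default in a Kubernetes Deployment:

```python
# Hedged sketch of a platform abstraction over raw Kubernetes YAML.
# The function and its defaults are illustrative, not a real platform API.
import yaml  # pip install pyyaml

def simple_deployment(name, image, replicas=2, port=8080):
    """Return a Kubernetes Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }

# Developers state the three things they care about; the platform team
# owns everything else in the template.
print(yaml.safe_dump(simple_deployment("billing-api", "registry.example/billing:1.4")))
```

That division is Leong's point in miniature: responsibility split across the lifecycle rather than all or nothing.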
As supply chains tighten, logistics must optimize with AI
Before jumping the gun, identify your bottlenecks, understand the delivery
systems available and discover the root cause of the congestion. Factors to
analyze are the capacity of your shipping methods, your warehouse management,
average delivery time and the accuracy of your demand predictions. Only by
understanding your current capabilities and inefficiencies will you be able to
deploy the appropriate technology. Build your systems in an orderly manner:
Build out your technology step by step. This is vital since some companies
assume that adding multiple solutions and automating everything at once will
reap the best results. This is not the case. ... Overall, applying AI
analytics to these problems will help you optimize elements like warehouse
capacity, transportation utilization and delivery times. At some
point, however, business leaders have to choose between tradeoffs. Is the main
goal to keep costs low or to increase delivery speed? Are long transport
distances to be avoided due to emissions? While AI can show which alternatives
are more cost-effective or climate-friendly, companies will have to make the
ultimate decision about their business trajectory.
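To make that tradeoff concrete, here is a deliberately tiny Python sketch (every number and option name is invented for illustration): the model scores each shipping option on cost, speed and emissions, but the weights, i.e. the business priorities, come from leadership rather than from the AI:

```python
# Hedged sketch: multi-objective scoring of shipping options. All figures
# are made up; the weighting scheme is illustrative, not a real model.
options = [
    # name,       cost ($), delivery (days), CO2 (kg)
    ("air",       420.0,    1.0,             180.0),
    ("road",      150.0,    3.0,              60.0),
    ("rail+road", 110.0,    5.0,              25.0),
]

def score(option, w_cost, w_days, w_co2):
    _, cost, days, co2 = option
    # Lower is better on every axis; the weights encode business priorities.
    return w_cost * cost + w_days * days + w_co2 * co2

# A speed-first business and an emissions-first one pick differently:
for label, weights in [("speed-first", (0.1, 100.0, 0.1)),
                       ("emissions-first", (0.2, 5.0, 5.0))]:
    best = min(options, key=lambda o: score(o, *weights))
    print(label, "->", best[0])  # speed-first -> air, emissions-first -> rail+road
```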
SOC modernization: 8 key considerations
When an asset is under attack, security analysts need to understand if it is a
test/development server or a cloud-based workload hosting a business-critical
application. To get this perspective, SOC modernization combines threat,
vulnerability, and business context data for analysts. ... Cisco purchased Kenna
Security for risk-based vulnerability management, Mandiant grabbed Intrigue for
attack surface management, and Palo Alto gobbled up Expanse Networks for ASM as
well. Meanwhile, SIEM leader Splunk provides risk-based alerting to help
analysts prioritize response and remediation actions. SOC modernization makes
this blend a requirement. ... SOC modernization includes a commitment to
constant improvement. This means understanding threat actor behavior, validating
that security defenses can counteract modern attacks, and then reinforcing any
defensive gaps that arise. CISOs are moving toward continuous red teaming and
purple teaming for this very purpose. In this way, SOC modernization will drive
demand for continuous testing and attack path management tools from vendors like
AttackIQ, Cymulate, Randori, SafeBreach, and XM Cyber.
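The business-context point is easy to see in miniature. The scoring function below is hypothetical (it is not any vendor's actual formula): it blends a vulnerability score, threat-intel activity and asset criticality so that the same CVE yields very different priorities on a test server and on a business-critical workload, which is what risk-based alerting does at scale:

```python
# Hedged sketch: blending threat, vulnerability and business context into
# one alert risk score. Weights and tiers are invented for illustration.
def alert_risk(cvss, actively_exploited, asset_tier):
    # Business context: test/dev assets score far below production workloads.
    criticality = {"test": 0.2, "dev": 0.3,
                   "prod-internal": 0.7, "prod-critical": 1.0}[asset_tier]
    # Threat context: active exploitation in the wild raises urgency.
    threat = 1.0 if actively_exploited else 0.4
    # Vulnerability context: CVSS normalized to 0..1.
    return round(cvss / 10.0 * threat * criticality * 100)

# The same critical CVE, very different responses:
print(alert_risk(9.8, True, "test"))           # ~20: schedule a fix
print(alert_risk(9.8, True, "prod-critical"))  # ~98: page someone now
```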
Challenging misconceptions around a developer career
Experience counts a lot for developers, just as it does for pilots or surgeons.
Technical experience is relatively easy to pick up, but the experiences that
build instinct in the best developers are rarely gained alone. Developers work
with others and learn from one another along the way. They seek collaboration on
difficult problems and offer thoughtful feedback and suggestions on work in
progress. Ultimately, developer tools are built for collaboration, encouraging
the exchange of comments and open discussion. There are so many misconceptions
about successful developers. Some of them may have some truth to them, while
others are outdated or were completely false in the first place. The idea of
developers as antisocial individuals is not always accurate. Developers are more
often creative problem solvers who pair that creativity with deep technical
skills to tackle the task at hand. The most successful developers combine emotional intelligence
with hard work and a curiosity for learning something new – and they help others
around them to do the same.
Quote for the day:
"Without courage, it doesn't matter
how good the leader's intentions are." -- Orrin Woodward