
Daily Tech Digest - November 18, 2025


Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous



The rise of the chief trust officer: Where does the CISO fit?

Trust is touted as a differentiator for organizations looking to strengthen customer confidence and find a competitive advantage. Trust cuts across security, privacy, compliance, ethics, customer assurance, and internal culture. For the custodians of trust, that’s a wide-ranging remit without the obvious definition of other C-suite roles. Typically, the CISO continues to own controls and protection, while the CTrO broadens the remit to reputation, ethics, and customer confidence. Where cybersecurity reports to the CTrO, the arrangement offers a way out of IT and the competing priorities that come with reporting to the CIO. This partnership repositions security from ‘department of no’ to business enabler, Forrester notes. ... Patel says that strong alignment between customer trust and business strategy is critical. “If you don’t have credibility in the marketplace, with your partners and customers, your business strategy is dead on arrival,” he tells CSO. Whereas the CISO’s day-to-day responsibilities include checking on the SOC, reviewing alerts, GRC, managing other security operations, and board reporting, the chief trust officer role weaves customer trust throughout, says Patel. “It’s really bringing that trust lens into the decision-making equation and challenging colleagues and partners to think in the same manner.” ... There is also the question of how organizations operationalize trust — and whether it can be measured. No off-the-shelf platform exists, so CTrOs must build their own dashboards combining customer and employee metrics to track trends and identify early signs of trust erosion.
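Since no off-the-shelf trust platform exists, the kind of homegrown dashboard logic described here can only be illustrated, not cited. The sketch below combines a few hypothetical customer and employee metrics into a weighted score and flags sustained decline; all metric names, weights, and figures are invented for illustration:

```python
# Hypothetical monthly readings a trust dashboard might combine;
# metric names, weights, and values are invented for illustration.
months = [
    {"nps": 42, "complaint_rate": 0.031, "employee_trust": 0.78},
    {"nps": 40, "complaint_rate": 0.035, "employee_trust": 0.76},
    {"nps": 36, "complaint_rate": 0.041, "employee_trust": 0.73},
    {"nps": 33, "complaint_rate": 0.049, "employee_trust": 0.70},
]

def trust_score(m):
    # Normalize each metric to roughly 0..1 and weight it;
    # the complaint rate counts against the score.
    return round(
        0.5 * (m["nps"] / 100)
        + 0.3 * m["employee_trust"]
        + 0.2 * (1 - 10 * m["complaint_rate"]),
        3,
    )

scores = [trust_score(m) for m in months]
# A simple erosion signal: the composite score has declined every period.
eroding = all(later < earlier for earlier, later in zip(scores, scores[1:]))
print(scores, "erosion warning:", eroding)
```

The point is not the particular weights but the shape of the exercise: pick metrics you already collect, combine them consistently, and watch the trend rather than any single reading.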


When Machines Attack Machines: The New Reality of AI Security

Attackers decomposed tasks and distributed them across thousands of instructions fed into multiple Claude instances, masquerading as legitimate security tests and circumventing guardrails. The campaign’s velocity and scale dwarfed what human operators could manage, representing a fundamental leap for automated adversarial capability. Anthropic detected the operation by correlating anomalous session patterns and observing operational persistence achievable only through AI-driven task decomposition at superhuman speeds. Though AI-generated attacks sometimes faltered—hallucinating data, forging credentials, or overstating findings—the impact proved significant enough to trigger immediate global warnings and precipitate major investments in new safeguards. Anthropic concluded that this development brings advanced offensive tradecraft within reach of far less sophisticated actors, marking a turning point in the balance between AI’s promise and peril. ... AI-based offensive operations exploit vulnerabilities across entire ecosystems instantly with the goal of exfiltrating critical intelligence and causing damage to the target. Offensive AI iterates adversarial attacks and novel exploits on a scale human red teams cannot attain. Defenses that work well against traditional techniques often fail outright under continuous, machine-driven attack cycles. 


From chatbots to colleagues: How agentic AI is redefining enterprise automation

According to Flores, agentic AI changes that equation. Each agent has a name, a mission defined by its system prompt, and a connection to company data through retrieval-augmented generation. Many of them also wield tools such as CRMs, databases, or workflow platforms. “An agent is like hiring a new employee who already knows your systems on day one,” Flores said. “It doesn’t just respond — it executes.” This new mode of collaboration also changes how employees interact with technology. Flores noted that his clients often name their agents, treating them as teammates rather than tools. “When marketing needs to check something, they’ll say, ‘Let’s ask Marco,’” he added. “That naming makes adoption easier — it feels human.” ... One of IBM’s first success stories came with password resets — an unglamorous but ubiquitous use case. Two agents now collaborate: one triages the request, while the other verifies credentials and performs the reset, all under the company’s identity-and-access-management system. Each agent has its own digital identity, ensuring audit trails and preventing impersonation. ... Agentic AI isn’t a software upgrade — it’s a redesign of how digital work gets done. Each of the leaders interviewed for this story emphasized that success depends as much on data and governance as on culture and experimentation. Before moving beyond chatbots, IT directors should ask not only “Can we do this?” but “Where should we start — and how do we do it safely?”
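Flores’s anatomy of an agent — a name, a mission defined by a system prompt, retrieval over company data, and tools it can wield — can be sketched in a few lines. Everything below (the documents, the keyword-overlap stand-in for retrieval, the agent “Marco,” the ticket tool) is an illustrative assumption, not any vendor’s implementation:

```python
# Toy company knowledge base standing in for a real document store.
COMPANY_DOCS = {
    "password_policy": "Passwords rotate every 90 days.",
    "vpn_howto": "Connect via the corporate VPN client before accessing CRM.",
}

def retrieve(query):
    # Crude stand-in for retrieval-augmented generation: return documents
    # whose key words overlap the query.
    q = set(query.lower().split())
    return [text for key, text in COMPANY_DOCS.items()
            if q & set(key.split("_"))]

class Agent:
    def __init__(self, name, mission, tools):
        self.name = name
        self.mission = mission   # the system prompt that defines its role
        self.tools = tools       # e.g. CRM, database, or workflow calls

    def handle(self, request):
        context = retrieve(request)          # ground the request in company data
        tool = self.tools.get("ticket")
        action = tool(request) if tool else "no tool available"
        return {"agent": self.name, "context": context, "action": action}

marco = Agent("Marco", "Answer marketing questions using company data.",
              {"ticket": lambda r: f"opened ticket for: {r}"})
print(marco.handle("what is the password policy"))
```

A production agent would replace the retrieval stub with a vector store and the lambda with audited tool calls under an identity-and-access-management system, as in the IBM password-reset example.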


What to look for in an AI implementation partner

Good AI implementation partners need not be limited to big professional services firms. Smaller firms such as AI consultancies and startups can provide lots of value. Regardless, many organizations require outside expertise when deploying, monitoring, and maintaining AI tools and services. ... “Many firms understand AI tools at a surface level, but what truly matters is the ability to contextualize AI within the nuances of a specific industry,” says Hrishi Pippadipally, CIO at accounting and business advisory firm Wiss. ... An effective partner must be able to balance innovation with the guardrails of security, privacy, and industry-specific compliance, Agrawal adds. “Otherwise, IT leaders will inherit long-term liabilities,” he says. ... “The mistake many organizations make is focusing only on technical credentials or flashy demos,” Agrawal says. “What’s often overlooked and what I prioritize is whether the partner can embed AI into existing workflows without disrupting business continuity. A good partner knows how to integrate AI so that it doesn’t just work in theory, but delivers impact in the complex reality of enterprise operations.” ... “Most evaluation checklists focus on the technical side — security, compliance, data governance, etc.,” says Sara Gallagher, president of The Persimmon Group, a business management consultancy. “While that matters, too many execs are skipping over the thornier questions.”


Magnetic tape is going strong in the age of AI, and it's about to get even better

“Aramid permits the manufacture of significantly thinner and smoother media, enabling longer tape lengths in a standard LTO Ultrium cartridge form factor,” the organization noted in a statement. “This material innovation provides 10 TB more native capacity than the currently available 30 TB LTO-10 cartridge, which is manufactured using different materials.” Stephen Bacon, VP for data protection solutions product management at HPE, said the new cartridges are aimed at enterprises spanning an array of industries dealing with high data volumes, from manufacturing to financial services. “AI has turned archives into strategic assets,” Bacon commented. ... Tape storage has a number of distinct advantages, including low cost, durability, and easy portability. According to previous analysis from the LTO Program, companies using tape recorded an 86% lower total cost of ownership (TCO) compared to disk storage. TCO compared to cloud storage was also 66% lower across a 10-year period, figures showed. Notably, the use of tape for unstructured data storage also adds to the appeal, with this now vital in the training process for large language models (LLMs). ... Long-term, tape storage is only going to improve, at least if the LTO Program’s roadmap is to be believed. Through generations 11 through 14, enterprises can expect to see significant capacity gains, eventually peaking with a 913 TB cartridge.


The rebellion against robot drivel

LLMs are “lousy writers and (most importantly!) they are not you,” Cantrill argues. That “you” is what persuades. We don’t read Steinbeck’s The Grapes of Wrath to find a robotic approximation of what desperation and hurt seem to be; we read it because we find ourselves in the writing. No one needs to be Steinbeck to draft press releases, but if that press release sounds samesy and dull, does it really matter that you did it in 10 seconds with an LLM versus an hour on your own mental steam? A few years ago, a friend in product marketing told me that an LLM generated better sales collateral than the more junior product marketing professionals he’d hired. His verdict was that he would hire fewer people and rely on LLMs for that collateral, which only got a few dozen downloads anyway, from a sales force that numbered in the thousands. Problem solved, right? Wrong. If few people are reading the collateral, it’s likely the collateral isn’t needed in the first place. Using LLMs to save money on creating worthless content doesn’t seem to be the correct conclusion. Ditto using LLMs to write press releases or other marketing content. I’ve said before that the average press release sounds like it was written by a computer (and not a particularly advanced computer), so it’s fine to say we should use LLMs to write such drivel. But isn’t it better to avoid the drivel in the first place? Good PR people think about content and its place in a wider context rather than just mindlessly putting out press releases.


AI’s Impact on Mental Health

“Talking to a therapist can be intimidating, expensive, or complicated to access, and sometimes you need someone—or something—to listen at that exact moment,” said Stephanie Lewis, a licensed clinical social worker and executive director of Epiphany Wellness addiction and mental health treatment centers. Chatbots allow people to vent, process their feelings, and get advice without worrying about being judged or misunderstood, Lewis said. “I also see that people who struggle with anxiety, social discomfort, or trust issues sometimes find it easier to open up to a chatbot than a real person.” Users are “often looking for a safe space to express emotions, receive reassurance, or find quick stress-management strategies,” added Dr. Bryan Bruno, medical director of Mid City TMS, a New York City-based medical center focused on treating depression. ... “Chatbots created for therapy are often built with input from mental health professionals and integrate evidence-based approaches, like cognitive behavioral therapy techniques,” Tse said. “They can prompt reflection and guide users toward actionable steps.” Lewis agreed that some therapeutic chatbots are designed with real therapy techniques, like Cognitive Behavioral Therapy (CBT), which can help manage stress or anxiety. “They can guide users through breathing exercises, mindfulness techniques, and journaling prompts, all great tools,” she said.


Holistic Engineering: Organic Problem Solving for Complex Evolving Systems

Late projects. Architectures that drift from their original design. Code that mysteriously evolves into something nobody planned. These persistent problems in software development often stem not from technical failures ... Holistic engineering is the practice of deliberately factoring these non-technical forces into our technical decisions, designs, and strategies. ... Holistic engineering involves considering, during technical design, not only traditional technical factors but also all the other non-technical forces that will influence your system anyway. By acknowledging these forces, teams can view the problem as an organic system and influence, to some extent, various parts of the system. ... Consider the actual information structure within your organization. Understanding actual workflow patterns and communication channels reveals how work truly gets accomplished. These communication patterns often differ significantly from the formal hierarchy. Next, identify which processes could block your progress. For example, some organizations require approval from twenty people, including the CTO, to decide on a release. ... Organizations that embrace holistic engineering gain predictable control over forces that typically derail technical projects. Instead of reacting to "unforeseen" delays and architectural drift, teams can anticipate and plan for organizational constraints that inevitably influence technical outcomes.
At its heart, industrial AI is about automating and optimising business processes to improve decision-making, enhance efficiency and increase profitability. It requires the collection of vast volumes of data from sources like IoT sensors, cameras, and back-office systems, and the application of machine and deep learning algorithms to surface insights. In some cases, the AI powers robots to supercharge automation, and in others, it utilises edge computing for faster, localised processing. Agentic AI helps firms go even further, by working autonomously, dynamically and intelligently to achieve the goals it is set. ... “You get the data in from IoT and you trigger that as an anomaly,” says Pederson. “You analyse the anomaly against all your historic records – other incidents that have happened with customers and how they have been fixed. You relate it to your knowledge base articles. And then you relate it to your inventory on your service vans, like which service vans and which technicians are equipped to do the job. “So it’s the whole estate of structured, unstructured and processed data. In the past, they would send a technician out, and they could get it right 84% of the time. Now they have improved their first-time fix rate to 97%.” Both this and the aforementioned field service deployment feature an “agentic dispatcher” which autonomously creates and publishes the schedules to the relevant service technicians, updates their calendar and suggests the best route to take. “In the very near future, AI agents will not only be helping to address work for people behind a desk, but guiding robots directly,” says Pederson.
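Pederson’s dispatch flow — an IoT anomaly comes in, then historic incidents, knowledge-base articles, and van inventory are consulted before a technician is assigned — can be sketched as below. The data, field names, and matching logic are all illustrative assumptions, not the actual product:

```python
# Toy data standing in for the "whole estate" of structured and
# unstructured records the article describes.
historic_incidents = [
    {"fault": "pump_overheat", "fix": "replace seal", "part": "seal-kit"},
]
knowledge_base = {"pump_overheat": "KB-104: seal replacement procedure"}
service_vans = [
    {"van": "V1", "technician": "Ana", "parts": {"seal-kit", "belt"}},
    {"van": "V2", "technician": "Ben", "parts": {"filter"}},
]

def dispatch(anomaly):
    # 1. Match the anomaly against historic incidents and how they were fixed.
    match = next((i for i in historic_incidents if i["fault"] == anomaly), None)
    if not match:
        return {"status": "no known fix, escalate"}
    # 2. Find a van whose inventory and technician can do the job.
    van = next((v for v in service_vans if match["part"] in v["parts"]), None)
    # 3. Return the fix, the related knowledge-base article, and the assignment.
    return {
        "fix": match["fix"],
        "article": knowledge_base.get(anomaly),
        "assign": van["technician"] if van else "order parts first",
    }

print(dispatch("pump_overheat"))
```

An agentic dispatcher would additionally publish the schedule, update the technician’s calendar, and suggest a route, which is where the 84%-to-97% first-time-fix improvement is claimed to come from.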


What security pros should know about insurance coverage for AI chatbot wiretapping claims

There are subtle differences in the way courts are viewing privacy litigation arising from the use of AI chatbots in comparison to litigation involving analytical tools like session replay or cookies. Both claims involve allegations that a third party is intercepting communications without proper consent, often under state wiretapping laws, but the legal arguments and defenses vary because the data being collected is different. ... Whether or not an exclusion will ultimately impact coverage depends both on the specific language of the exclusion and also the allegations raised in the underlying lawsuit. For example, broadly worded exclusions with “catch-all” phrases precluding coverage for any statutory violation may be more difficult for a policyholder to overcome than an exclusion that identifies specific statutes by name. As these claims are relatively new, we have yet to see significant examples of how this plays out in the context of insurance coverage litigation. However, we saw similar coverage arguments in the context of insurance coverage litigation where the underlying suit alleged violations of the Biometric Information Privacy Act (BIPA). ... To help mitigate risks, organizations should review their user consent mechanisms for AI chatbot communications. Consent does not always mean signing a form, but could include prominently displaying chatbot privacy notices before any data collection, providing easy access to the business’s privacy policy detailing how chatbot interactions are stored, and using automated disclaimers at the start of each chat session.
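The consent mechanisms listed above — a prominent privacy notice before any data collection and an automated disclaimer at the start of each session — might be enforced roughly as follows. The session API is a hypothetical sketch, not a reference to any real chatbot framework:

```python
# Hypothetical chat session that refuses to record anything until the
# user has acknowledged the privacy disclaimer.
DISCLAIMER = ("This chat may be recorded and processed. "
              "See our privacy policy before continuing.")

class ChatSession:
    def __init__(self):
        self.consented = False
        self.transcript = []

    def start(self):
        # Automated disclaimer at the start of each session.
        return DISCLAIMER

    def accept(self):
        self.consented = True

    def log(self, message):
        # No data collection before consent is given.
        if not self.consented:
            raise PermissionError("no data collection before consent")
        self.transcript.append(message)

s = ChatSession()
print(s.start())
try:
    s.log("hello")            # rejected: consent not yet given
except PermissionError as e:
    print("blocked:", e)
s.accept()
s.log("hello")                # accepted after acknowledgement
print(s.transcript)
```

The design choice worth copying is structural: make recording impossible, not merely discouraged, until the acknowledgement exists.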

Daily Tech Digest - January 23, 2024

How human robot collaboration will affect the manufacturing industry

Traditional manufacturing systems frequently struggle to adjust to shifting demands and product variances. Human-robot collaboration provides flexibility, which is critical in today’s market. Robots are easily programmed and reprogrammed, allowing firms to quickly alter production lines to suit new goods or design changes. This adaptability is critical in an era where customer preferences shift quickly and companies must keep pace with them. ... While the initial investment in robotics technology may be significant, the long-term cost savings from human-robot collaboration are attractive. Automated procedures in manufacturing lower labor costs, boost productivity, and greatly reduce errors, resulting in a more cost-effective manufacturing operation. ... There is a notion that automation will replace human occupations; on the contrary, the collaboration is intended to supplement human abilities. Automating mundane and physically demanding jobs frees human workers to focus on critical thinking, problem-solving, and creativity.


Mastering System Design: A Comprehensive Guide to System Scaling for Millions

Horizontal scaling emerges as a strategic solution to accommodate increasing demands and ensure the system’s ability to handle a burgeoning user base. Horizontal scaling involves adding more servers to the system and distributing the workload across multiple machines. Unlike vertical scaling, which involves enhancing the capabilities of a single server, horizontal scaling focuses on expanding the server infrastructure horizontally. One of the key advantages of horizontal scaling is its potential to improve system performance and responsiveness. By distributing the workload across multiple servers, the overall processing capacity increases, alleviating performance bottlenecks and enhancing the user experience. Moreover, horizontal scaling offers improved fault tolerance and reliability. The redundancy introduced by multiple servers reduces the risk of a single point of failure. In the event of hardware issues or maintenance requirements, traffic can be seamlessly redirected to other available servers, minimizing downtime and ensuring continuous service availability. Scalability becomes more flexible with horizontal scaling. 
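The two properties described — distributing requests across servers and seamlessly redirecting traffic away from a failed node — can be illustrated with a minimal round-robin balancer. Server names and the health-tracking scheme are illustrative assumptions:

```python
# Minimal round-robin load balancer: spread requests across servers,
# skip any server marked unhealthy so service continues on the rest.
class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)
        self.i = 0

    def mark_down(self, server):
        self.healthy.discard(server)

    def route(self, request):
        # Walk the ring at most once; a failed node is skipped, so
        # traffic is redirected to the remaining capacity.
        for _ in range(len(self.servers)):
            server = self.servers[self.i % len(self.servers)]
            self.i += 1
            if server in self.healthy:
                return (server, request)
        raise RuntimeError("no healthy servers")

lb = LoadBalancer(["app1", "app2", "app3"])
print([lb.route(n)[0] for n in range(4)])   # round-robin: wraps back to app1
lb.mark_down("app2")
print([lb.route(n)[0] for n in range(3)])   # app2 is skipped from now on
```

Real balancers add health checks, connection draining, and weighting, but the redundancy argument in the paragraph above is exactly this skip-and-continue behavior.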


Backup admins must consider GenAI legal issues -- eventually

LLMs require a massive amount of data and, by proxy, dip into nebulous legal territory, a risk inherent to GenAI services contracts, said Andy Thurai, an analyst at Constellation Research. Many GenAI vendors are now offering indemnity or other legal protections for customers. ... "It's a [legal] can of worms that enterprises can't afford to open," Thurai said. Unfortunately for enterprise legal teams, the need to create guidance is fast approaching. Lawsuits by organizations such as the New York Times are looking to take back IP control and copyright from OpenAI's proprietary, commercial LLMs. Those suits are entirely focused on the contents of data itself rather than the mechanics of backup and storage that backup admins would concern themselves with, said Mauricio Uribe, chair of the software/IT and electrical practice groups at law firm Knobbe Martens. The business advantages of GenAI within backup technology are still unproven and unknown, he added. Risks such as patent infringement remain a possibility. Backup vendors are implementing GenAI capabilities such as support chatbots into their tools now, such as Rubrik's Ruby and Cohesity's Turing AI. But neither incorporates enterprise customer data or specific customer information, according to both vendors.


CFOs urged to reassess privacy budgets amid rising data privacy concerns

The ISACA Privacy in Practice 2024 survey report reveals that only 34% of organizations find it easy to understand their privacy obligations. This lack of clarity can lead to non-compliance and increased risk of data breaches. Additionally, only 43% of organizations are very or completely confident in their privacy team’s ability to ensure data privacy and achieve compliance with new privacy laws and regulations. ... To address the challenges outlined in the survey, organizations are taking proactive steps to strengthen their privacy programs. Training plays a crucial role in mitigating workforce gaps and privacy failures. Half of the respondents (50%) note that they are training non-privacy staff to move into privacy roles, while 39% are increasing the usage of contract employees or outside consultants. Organizations are also investing in privacy awareness training for employees. According to the survey, 86% of organizations provide privacy awareness training, with 66% offering training to all employees annually. Moreover, 52% of respondents provide privacy awareness training to new hires. 


Cisco sees headway in quantum networking, but advances are slow

Cisco has said that it envisions quantum data centers that could use classic LAN models to tie together quantum computers, or a quantum-based network that transmits quantum bits (qubits) from quantum servers at high speed to handle commercial-grade applications. “Another trend will be the growing importance of quantum networking, which in 4 or 5 years – perhaps more – will enable quantum computers to communicate and collaborate for more scalable quantum solutions,” Centoni stated. “Quantum networking will leverage quantum phenomena such as entanglement and superposition to transmit information.” The current path for quantum researchers and developers is to continue to grow radix, expand mesh networking (the ability for network fabrics to support many more connections per port and higher bandwidth), and create quantum switching and repeaters, Pandey said. “We want to be able to carry quantum signals over longer distances, because quantum signals deteriorate rapidly,” he said. “We definitely want to enable them to handle those signals within a data center footprint, and that’s technology we will start experimenting on.”


Navigating the Digital Transformation: The Role of IT

While many acknowledged engaging with the six core elements of the Rewired framework, few participants considered themselves frontrunners in significant progress. This underscores the complexity and ongoing nature of digital transformation, necessitating continuous adaptation across leadership, culture, and technology. Organizations are directing efforts towards both front-end (customer experience) and back-end (operational optimization), recognizing the interconnected nature of digital transformation. Success stories include consolidating Robotic Process Automation (RPA), Artificial Intelligence (AI), and low-code development within a single organizational department. This integration facilitates synergies and holistic advancements in digital capabilities. The evolving nature of ERP transformations was also discussed, with a shift towards continuous improvements and a focus on operating models and ways of working, moving beyond purely technological considerations. The insights from this roundtable underscore the multifaceted nature of digital transformation.


Harvard Scientists Discover Surprising Hidden Catalyst in Human Brain Evolution

“Brain tissue is metabolically expensive,” said the Human Evolutionary Biology assistant professor. “It requires a lot of calories to keep it running, and in most animals, having enough energy just to survive is a constant problem.” For larger-brained Australopiths to survive, therefore, something must have changed in their diet. Theories put forward have included changes in what these human ancestors consumed or, most popularly, that the discovery of cooking allowed them to garner more usable calories from whatever they ate. ... The shift was probably a happy accident. “This was not necessarily an intentional endeavor,” Hecht posited. “It may have been an accidental side effect of caching food. And maybe, over time, traditions or superstitions could have led to practices that promoted fermentation or made fermentation more stable or more reliable.” This hypothesis is supported by the fact that the human large intestine is proportionally smaller than that of other primates, suggesting that we adapted to food that was already broken down by the chemical process of fermentation. 


Digital Personal Data Protection Act marks a new era of business-friendly governance

Surprising the business community, the DPDP Act 2023 removed the data localization requirements, marking a significant departure from the previous iterations of the Act. The earlier DPDP Bills required certain categories of personal data to be stored and processed within the country. The provision faced staunch global opposition, particularly from the US, which criticized India's requirements as discriminatory and trade distortive. In contrast, the DPDP Act, 2023 adopts a more inclusive approach, granting firms autonomy in the choice and location of cloud services for storing and processing personal data of their users. By prioritizing cost-effectiveness and competitiveness for the firms, the removal of data localization requirements signals a more accommodating government stance. In addition to scrapping data localization requirements, the DPDP Act 2023 also allows unrestricted cross-border transfer of Indian users’ personal data abroad, barring certain destination countries. Firms would not be required to conduct post-transfer impact assessments or to ensure that the destination country has similar data protection standards, as mandated in other jurisdictions like the EU and Vietnam.


Cybersecurity: The growing partnership between HR and risk management

HR professionals themselves can also be attractive targets to bad actors. The access they have to sensitive employee and company data can be a goldmine for hackers, putting a target on the back of those within the HR organization. As such, HR leaders should put proactive, pre-breach policies in place for their own functional colleagues. Policies might include contacting internal and external parties who ask for changes to sensitive information, such as invoice numbers, email passwords, direct deposit details, and software updates. They should also include policies for remote workers and incident response. ... When you purchase cyber insurance, you get access to pre-breach planning and policy templates, which for many organizations, is just as important as the breach coverage. While the optimal amount of insurance depends on many factors — including size, revenues, number of employees and access to confidential information — HR organizations of all sizes and structures benefit from pre-breach planning and policymaking.


IT services spending signals major role change for CIOs ahead

“This evolution in what CIOs do, the value proposition they bring to the company, is evident in the long-term playout. But it is not yet as evident to the CIOs themselves,” Lovelock said. He sees CIOs still thinking they are riding the same talent waves of the past, facing a temporary problem that they will solve: that their staff will come back, that hiring will resume, that attrition rates will decline, and that they will be able to attract the skills they need at prices they can afford. “It doesn’t look like they will ever be able to do that. There are too many things IT staff with these key resources and skills are looking for that are outside of the CIO’s control to deliver,” he said. With increasing reliance on IT services and consulting to deliver outcomes ranging from commoditized customer support to differentiating generative AI implementations, the CIO role may soon become less about being that one-stop shop for business support, overseeing projects and products developed in-house, and more about weaving together myriad services undertaken by an increasingly heterogeneous mix of talent sources, predominantly beyond the CIO’s direct purview.



Quote for the day:

"Thinking is easy, acting is difficult, and to put one's thoughts into action is the most difficult thing in the world." -- Johann Wolfgang von Goethe

Daily Tech Digest - June 07, 2023

The Design Patterns for Distributed Systems Handbook

Some people mistake distributed systems for microservices. And it's true – microservices are a distributed system. But distributed systems do not always follow the microservice architecture. So with that in mind, let's come up with a proper definition for distributed systems: A distributed system is a computing environment in which various components are spread across multiple computers (or other computing devices) on a network. ... If you decide that you do need a distributed system, then there are some common challenges you will face: Heterogeneity – Distributed systems allow us to use a wide range of different technologies. The problem lies in how we keep consistent communication between all the different services. Thus it is important to have common standards agreed upon and adopted to streamline the process. Scalability – Scaling is no easy task. There are many factors to keep in mind such as size, geography, and administration. There are many edge cases, each with their own pros and cons. Openness – Distributed systems are considered open if they can be extended and redeveloped.
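The scalability challenge can be made concrete with a toy experiment: partition keys across nodes by hash, then count how many keys must move when one node is added. The naive modulo scheme below is deliberately simplistic; real systems use consistent hashing precisely to avoid this wholesale reshuffling:

```python
import hashlib

# Naive hash partitioning: key -> node index. Adding a node changes
# the modulus, so roughly three quarters of the keys change owners.
def node_for(key, n_nodes):
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n_nodes

keys = [f"user-{i}" for i in range(1000)]
before = {k: node_for(k, 3) for k in keys}   # cluster of 3 nodes
after = {k: node_for(k, 4) for k in keys}    # scale out to 4 nodes
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved}/1000 keys moved after adding one node")
```

This is one of the edge cases the excerpt alludes to: a scheme that works fine at a fixed size becomes expensive the moment the cluster grows.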


Shadow IT is increasing and so are the associated security risks

Gartner found that business technologists, those business unit employees who create and bring in new technologies, are 1.8 times more likely than other employees to behave insecurely across all behaviors. “Cloud has made it very easy for everyone to get the tools they want but the really bad thing is there is no security review, so it’s creating an extraordinary risk to most businesses, and many don’t even know it’s happening,” says Candy Alexander, CISO at NeuEon and president of Information Systems Security Association (ISSA) International. To minimize the risks of shadow IT, CISOs need to first understand the scope of the situation within their enterprise. “You have to be aware of how much it has spread in your company,” says Pierre-Martin Tardif, a cybersecurity professor at Université de Sherbrooke and a member of the Emerging Trends Working Group with the professional IT governance association ISACA. Technologies such as SaaS management tools, data loss prevention solutions, and scanning capabilities all help identify unsanctioned applications and devices within the enterprise.
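The discovery step — scanning to identify unsanctioned applications — can be sketched as a comparison of domains seen in egress logs against the sanctioned-application list. The domain names below are invented for illustration:

```python
from collections import Counter

# Illustrative sanctioned-application list and egress-log sample.
sanctioned = {"salesforce.com", "slack.com", "office365.com"}

egress_log = [
    "salesforce.com", "randomfileshare.example", "slack.com",
    "unvetted-ai-notes.example", "randomfileshare.example",
]

# Anything observed but not sanctioned is a shadow-IT candidate,
# ranked by how often it was contacted.
unsanctioned = Counter(d for d in egress_log if d not in sanctioned)
for domain, hits in unsanctioned.most_common():
    print(f"unsanctioned: {domain} ({hits} requests)")
```

SaaS management tools and DLP products do essentially this at scale, enriched with user identity and data-sensitivity signals.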


Worker v bot: Humans are winning for now

Ethical and legislative concerns aside, what the average worker wants to know is if they’ll still have a job in a few years’ time. It’s not a new concern: in fact, jobs are lost to technological advancements all the time. A century ago, most of the world’s population was employed in farming, for example. Professional services company Accenture asserts that 40% of all working hours could be impacted by generative AI tools — primarily because language tasks already account for just under two thirds of the total time employees work. In The World Economic Forum’s (WEF) Future of Jobs Report 2023, jobs such as clerical or secretarial roles, including bank tellers and data entry clerks, are reported as likely to decline. Some legal roles, like paralegals and legal assistants, may also be affected, according to a recent Goldman Sachs report. ... Customer service roles are also increasingly being replaced by chatbots. While chatbots can be helpful in automating customer service scenarios, not everyone is convinced. Sales-as-a-Service company Feel offers, among other services, actual live sales reps to chat with online shoppers.


The Future of Continuous Testing in CI/CD

Continuous testing is rapidly evolving to meet the needs of modern software development practices, with new trends emerging to address the challenges development teams face. Three key trends currently gaining traction in continuous testing are cloud-based testing, shift-left testing, and security testing. These trends are driven by the need to increase efficiency and speed in software development while ensuring the highest quality and security levels. Let’s take a closer look at these trends. Cloud-Based Testing: Continuous testing is deployed through cloud-based computing, which provides multiple benefits like ease of deployment, mobile accessibility, and quick setup time. Businesses are now adopting cloud-based services due to their availability, flexibility, and cost-effectiveness. Cloud-based testing requires little coding skill or setup time, which makes it a popular choice for businesses. ... Shift-Left Testing: Shift-left testing moves testing earlier in the development cycle rather than waiting until later stages, such as system or acceptance testing.
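Shift-left in miniature: the check below runs at commit time (for example in a pre-commit hook or a CI unit-test stage) rather than waiting for system or acceptance testing. The discount function is an invented example:

```python
# Invented business rule under test.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be 0-100")
    return round(price * (1 - percent / 100), 2)

# Unit checks that run on every commit, catching defects long before
# system or acceptance testing would.
def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
    try:
        apply_discount(50.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
print("unit checks passed at commit time")
```

The same tests typically run again in later pipeline stages; shifting left just means the first failure surfaces minutes after the change, not weeks.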


IT is driving new enterprise sustainability efforts

There’s an additional sustainability benefit to modernizing applications, says Patel at Capgemini. “Certain applications are written in a way that consumes more energy.” Digital assessments can help measure the carbon footprint of internally developed apps, she says. Modern application design is key to using the cloud efficiently. At Choice Hotels, many components now run as services that can be configured to automatically shut down during off hours. “Some run as micro processes when called. We’re using serverless technologies and spot instances in the AWS world, which are more efficient, and we’re building systems that can handle it when those disappear,” Kirkland says. “Every digital interaction has a carbon price, so figure out how to streamline that,” advises Patel. This includes business process reengineering, as well as addressing data storage and retention policies. For example, Capgemini engages employees in sustainable IT by holding regular “digital cleaning days” that include deleting or archiving email messages and cleaning up collaborative workspaces.


SRE vs. DevOps? Successful Platform Engineering Needs Both

The complexity of managing today’s cloud native applications drains DevOps teams. Building and operating modern applications requires significant amounts of infrastructure and an entire portfolio of diverse tools. When individual developers or teams choose to use different tools and processes to work on an application, this tooling inconsistency and incompatibility causes delays and errors. To overcome this, platform engineering teams provide a standardized set of tools and infrastructure that all project developers can use to build and deploy the app more easily. Additionally, scaling applications is difficult and time-consuming, especially when traffic and usage patterns change over time. Platform engineering teams address this with their golden paths — or environments designed to scale quickly and easily — and logical application configuration. Platform engineering also helps with reliability. Development teams that use a set of shared tools and infrastructure tested for interoperability and designed for reliability and availability make more reliable software.


Zero Trust Model: The Best Way to Build a Robust Data Backup Strategy

A zero trust model changes your primary security principle from the age-old axiom “trust but verify” to “never trust; always verify.” Zero trust is a security concept that assumes any user, device, or application seeking access to a network is not to be automatically trusted, even if it is within the network perimeter. Instead, zero trust requires verification of every request for access, using a variety of security technologies and techniques such as multifactor authentication (MFA), least-privilege access, and continuous monitoring. A zero trust environment provides many benefits, though it is not without its flaws. Trust brokers are the central component of zero trust architecture. They authenticate users’ credentials and provide access to all other applications and services, which means they have the potential to become a single point of failure. Additionally, some multifactor authentication processes might make users wait a few minutes before allowing them to log in, which can hinder employee productivity. The location of trust brokers can also create latency issues for users. 
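The "never trust; always verify" principle can be sketched as a toy trust broker that re-evaluates every request against MFA state and a least-privilege policy, even for callers inside the perimeter. The roles, resources and policy data below are invented for illustration:

```python
# Minimal sketch of "never trust; always verify": every request is
# re-evaluated against identity, MFA state, and least-privilege rules,
# even when it originates inside the network perimeter.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool
    resource: str
    action: str

# Least-privilege policy: each role is granted only the specific
# resource/action pairs it needs (illustrative data, not a real policy).
POLICY = {
    "backup-operator": {("backup-vault", "read"), ("backup-vault", "write")},
    "analyst": {("backup-vault", "read")},
}

def authorize(req: Request) -> bool:
    """A toy trust broker: deny by default, verify every request."""
    if not req.mfa_verified:          # multifactor authentication required
        return False
    allowed = POLICY.get(req.user, set())
    return (req.resource, req.action) in allowed

print(authorize(Request("analyst", True, "backup-vault", "write")))  # False: read-only role
```

Note that the broker sits on every request path, which is exactly why the excerpt flags it as a potential single point of failure and latency bottleneck.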


How to Manage Data as a Product

The way most organizations go about managing data is out of step with the way people want to use data, says Wim Stoop, senior director of product marketing at Cloudera. “If you want to get your teeth fixed or your appendix out you go to an expert rather than a generalist,” he says. “The same should apply to the data that people in organizations need.” However, most enterprises treat data as a centralized and protected asset. It’s locked up in production applications, data warehouses, and data lakes that are administered by a small cadre of technical specialists. Access is tightly controlled, and few people are aware of the data the organization possesses outside of their immediate purview. The drive towards organizational agility has helped fuel interest in the data mesh. “Individual teams that are responsible for data can iterate faster in a well-defined construct,” Stoop says. “The shift to treating data as a product breaks down siloes and gives data longevity because it’s clearly defined, supported and maintained by the employees that know it intimately.”


Preparing for the Worst: Essential IT Crisis Preparation Steps

Crisis preparation begins with planning -- outlining the steps that must be taken in the event of a crisis, as well as procedures for data backup and recovery, network security, communication with stakeholders, and employee safety, says O’Brien, who founded the Yale Law School Privacy Lab. “Every organization should conduct regular drills and simulations to test the effectiveness of their plan,” he adds. Every enterprise should appoint an overall crisis management coordinator, an individual responsible for ensuring that there’s a coordinated, updated, and rehearsed crisis management plan, Glair advises. He also recommends creating a crisis management chain of authority that’s ready to jump into action as soon as a crisis event occurs. The crisis management coordinator may report directly to any of several enterprise departments, including risk management, legal, operations, or even the CIO or CFO. “The reporting location is not as important as the authority the coordinator is granted to prepare and manage the crisis management strategy,” he says.


How to make developers love security

Developers hate being slowed down or interrupted. Unfortunately, legacy security testing systems often have long feedback loops that negatively impact developer velocity. Whether it’s complex automated scans or asking the security team to complete manual reviews, these activities are a source of friction. They increase the delay between making a change and verifying its effect. Security suites with many different tools can result in context switching and multi-step mitigations, and the tools aren’t always equipped to find problems in older code. Only scanning the new changes in your pipeline maximizes performance, but this can allow oversights to occur as more vulnerabilities become known. Similarly, developers have to refamiliarize themselves with old work whenever a vulnerability impacts it. This is a cognitive burden that further increases the fix’s overall time and effort. All too often, these problems add up to an inefficient security model that prevents timely patches and consumes developers’ productive hours. 
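The trade-off between fast diff-only scans and complete full scans can be sketched in a few lines of Python; the "rule" strings here are invented stand-ins for real scanner signatures:

```python
# Sketch of the trade-off described above: a diff-only scan is fast but
# misses older files that match rules added *after* those files last
# changed; a periodic full rescan catches them. The rule strings are
# invented stand-ins for real scanner signatures.

RULES_V1 = ["eval("]
RULES_V2 = ["eval(", "pickle.loads("]   # a rule added later on

def scan(files: dict, rules: list) -> set:
    """Return the names of files matching any insecure pattern."""
    return {name for name, src in files.items()
            if any(rule in src for rule in rules)}

repo = {
    "old.py": "data = pickle.loads(blob)",   # unchanged for months
    "new.py": "result = eval(user_input)",   # changed in this commit
}

diff_only = scan({"new.py": repo["new.py"]}, RULES_V2)  # fast, incomplete
full_scan = scan(repo, RULES_V2)                        # slower, complete

print(sorted(diff_only))  # ['new.py']  (old.py's issue goes unnoticed)
print(sorted(full_scan))  # ['new.py', 'old.py']
```

A common compromise is to run the diff-only scan on every commit and schedule the full rescan nightly, so rule updates eventually reach old code without blocking developers.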



Quote for the day:

"Incompetence annoys me. Overconfidence terrifies me." -- Malcolm Gladwell

Daily Tech Digest - September 24, 2022

Tackling Developer Onboarding Complexity

A common thread in onboarding, and more broadly on reducing developer cognitive load, is the concept of “golden paths” or “paved paths.” Ultimately, the idea is to reduce complexity and get to the bare bones of what needs to be learned or done to increase developer velocity and safety. Mostly, once the cultural aspects of onboarding are covered, this comes back to the “golden path” platform created for developers, which includes the tools and processes that are proven to work but aren’t handcuffs. Once a developer knows how to walk, for example, platforms should be flexible enough to let them run. Humanitec’s CEO, Kaspar von Grünberg, said, “Perhaps more important than fancy golden paths is to agree on the lowest common tech denominator to empower developers to work faster. Why run ultra-complex things if there is an alternative? It is like taking a tractor to do your grocery shopping, which is not productive. If you scatter things all over the place, you are not getting the effects of scale, and the tools you bring in are not delivering ROI. This is why I advocate for the value of standardization. Standardization forms the lowest common tech denominators, clearing the way for individual freedom where needed.”


How devops in the cloud breaks down

First is the obvious issue: talent. To do devops in the cloud, you need devops engineers who understand how to build and use toolchains. More important, you need engineers who know how to build toolchains using cloud-based tools. Some (but not many) people out there have these skills. I see many companies fail to find them and even pull back devops to traditional platforms just so they can staff up. Sadly, that’s not a bad strategy right now. Second, the cloud rarely has all the tools you’ll need for most devops toolchains. Although we have a tremendous number of devops tools, either sold by the public cloud providers or by key partners that sell devops cloud services, about 10% to 20% of the tools you’ll need don’t exist on your public cloud platform. You will have to incorporate another provider’s platform, which then leads to multicloud complexity. Of course, the need for those absent tools depends on the type of application you’re building. This shortage is not as much of a problem as it once was because devops tool providers saw the cloud computing writing on the wall and quickly filled in the tool shortages. 


Tesla is set to introduce its prime 'Optimus' robot

"Autopilot/AI team is also working on Optimus and (actually smart) summon/autopark, which have end of month deadlines," Musk wrote while responding to a Tesla fan club account on Twitter. Musk's Texas-based company is reportedly considering ambitious plans to use thousands of humanoid robots within its factories before eventually extending to millions globally, per a job posting. According to Musk, who is now promoting a vision for the company that extends far beyond producing self-driving electric cars, the robot industry may eventually be worth more than Tesla's automobile income. A source familiar with the situation claimed that as Tesla holds more internal discussions on robotics, the buzz is growing within the organization. ... For Tesla to be successful, it will have to display robots performing various spontaneous acts. Such evidence might help Tesla stock, which is currently down 25 percent from its 2021 peak, according to Nancy Cooke, a professor of human systems engineering at Arizona State University.


Researchers Say It'll Be Impossible to Control a Super-Intelligent AI

Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits. "A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," wrote the researchers. "This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable." Part of the team's reasoning came from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved through some smart math, while we can know the answer for some specific programs, it's logically impossible to find a general method that gives us the answer for every potential program that could ever be written.
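Turing's argument can be sketched in Python: if a universal decider existed, a self-referential program could defeat it, while restricted families of programs remain decidable. All the names below are illustrative:

```python
# Sketch of Turing's diagonal argument. Suppose a general decider
# halts(f) existed for every program f; the self-referential program
# below would then defeat it, so no such decider can exist.

def paradox(halts):
    # If halts(paradox) answered True, paradox would loop forever;
    # if it answered False, paradox would halt at once. Either way
    # the decider is wrong about this very program.
    if halts(paradox):
        while True:
            pass

# For *restricted* families of programs, halting IS decidable. Here is
# a decider for the specific family "decrement n until it reaches 0":
def countdown_halts(n: int) -> bool:
    return n >= 0   # a negative start value would decrement forever

print(countdown_halts(5), countdown_halts(-1))  # True False
```

This is the distinction the excerpt draws: halting is knowable for particular programs, but no single method covers them all, which is why the authors argue a superintelligence cannot be fully contained by pre-set rules.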


The Mutating Cyber Threat

Although each best practice is important, having a programmatic approach is essential for success, Kaun said. “Too many organizations look at security as a list of individual tasks such as perimeter protection and patching, but in reality they all have to work together.” As best practices mature and become part of corporate culture, and as people become educated and equipped to apply those best practices, true change and improved security begins to evolve. “A common adage in security is ‘people, processes, and technology,’” Cusimano noted. “Two of those involve people because people have to adhere to the processes.” The human element is the ultimate toolset, including awareness, collaboration, support, and maintenance. “A proper security program is properly educated and equipped people applying best practice policy and procedures, aided by technology,” Kaun said. “While the right technology will accelerate the effort, if you do not have the global view, the appropriate people, and contextual data to act upon, you will struggle.” Establishing that culture is critical but won’t happen overnight, Cusimano said. He recalled the transition to a safety-first culture in many manufacturing plants.


MIT and Databricks Report Finds Data Management Key to Scaling AI

“Data issues are more likely than not to be the reason if companies fail to achieve their AI goals, according to more than two-thirds of the technology executives we surveyed,” says Francesca Fanshawe, editorial director for MIT Technology Review and editor of the report. “Improving processing speeds, governance, and quality of data, as well as its sufficiency for models, are the main data imperatives to ensure AI can be scaled.” Data security is also a priority, with leaders revealing they plan to increase spending on security improvement by an average of 101% over the next three years. The leader group also plans to invest 85% more in the same period on data governance, 69% more on new data and AI platforms, and 63% more on existing platforms. The report lists a few attributes of successful data and AI technology foundations, including a democratization of data to involve a greater number of data-literate employees who can configure and improve AI algorithms. Openness is another attribute, with open standards and data formats allowing organizations to source data, insights, and tools externally to facilitate collaboration.


Responsible AI, Blockchain in Safe and Ethical AI

Artificial Intelligence (AI) is a broad field that includes machine learning and cognitive computing, in which computers are programmed to mimic cognitive functions such as learning and problem solving many times faster and more accurately than a human. AI, or its subset computational intelligence, when combined with blockchain systems, can create more robust cryptographic functionality and ciphers, making it more difficult for cyber attackers to compromise systems. When blockchain participants have increased control over their data, they have the potential to decide with which parties and for what purposes their data are shared. To collect participant data for use in an AI dataset, participant permissions will need to be obtained.  ... The decentralized character of smart blockchains can help smart grids transform from centralized to distributed operation, breaking down information barriers and enabling secure data sharing among multiple participants.


Worried about quiet quitting? These Dos and Don'ts could stop it becoming a problem

To understand the risk of quiet quitting in current employees, keep in touch with former employees and find out what made them leave the company. Their insight can help you improve culture for current employees and reduce further resignations. Deal suggests conducting thorough exit interviews with employees who leave the company and reaching out six months later to assess their experience at their new job if they have one. This six-month communication opportunity can be the route back to the former workplace for some employees. If an employee expresses dissatisfaction at their new job and an interest in returning to your company, see what you can do for them. Employees who left your company on good terms, and later want to return to their old jobs, are called boomerang employees, and they can be very beneficial to your company. ... But beware: some employees may hesitate to ask for their old jobs back. They might fear a response from former colleagues who were unhappy at their departure, or they might be concerned about an employee they didn't like who is still in the business. But if you're lucky, this is an opportunity to have excellent talent return to your company.


DevOps Is Dead. Embrace Platform Engineering

Developers don’t want to do operations anymore, and that’s a bad sign for DevOps, at least according to this article by Scott Carey and this Twitter thread by Sid Palas. ... When developers in teams don’t agree on the extent to which they should, or can, do operations tasks, forcing everyone to do DevOps in a one-size-fits-all way has disastrous consequences. The primary consequence is the increasing cognitive load put on developers. This has forced many teams to reconsider how they balance the freedom that comes from developer self-service with mitigating cognitive load through abstraction. Both are necessary: Self-service capabilities are essential to moving quickly and efficiently. ... Platform engineering uses a product approach to enable the right amount of developer self-service and find the right level of abstraction for individual organizations and teams. Successful platform teams combine user research, regular feedback and marketing best practices to understand their developers, create a platform that solves common problems and get internal buy-in from key stakeholders.


SEO poisoning campaign directs search engine visitors from multiple industries to JS malware

Deepwatch came across the campaign while investigating an incident at a customer where one of the employees searched for “transition services agreement” on Google and ended up on a website that presented them with what appeared to be a forum thread in which one of the users shared a link to a zip archive. The zip archive contained a file called "Accounting for transition services agreement" with a .js (JavaScript) extension that was a variant of Gootloader, a malware downloader known in the past to deliver a remote access Trojan called Gootkit as well as various other malware payloads. Transition services agreements (TSAs) are commonly used during mergers and acquisitions to facilitate the transition of a part of an organization following a sale. Since they are frequently used, many resources are likely available for them. The fact that the user saw and clicked on this link suggests it ranked highly in the search results. When looking at the site hosting the malware delivery page, the researchers realized it was a sports streaming distribution site that, based on its content, was likely legitimate.



Quote for the day:

"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract

Daily Tech Digest - July 13, 2020

How to choose a robot for your company

There are lots of reasons a company might entertain automating processes with robots. According to Kern, the main reason is a labor shortage. Prior to COVID-19-related slowdowns, a competitive labor landscape and rising costs of living in many countries around the globe made hiring tough for skilled and unskilled positions alike. Automation, which often promises ROI efficiencies over time, particularly when it comes to repeatable tasks, is an attractive solution. "Robots can save money over time, not just by directly eliminating human labor, but by cutting out worker training and turnover," according to the Lux report for which Kern served as lead. "Most companies turn to automation and robotic solutions to deal with labor shortages, which is common in industries with repetitive tasks that have a high employee turnover rate. Companies also frequently use robots to automate dangerous tasks, keeping their employees out of harm's way." Post-COVID-19, there are also considerations like sanitation and worker volatility. As I've written, the perception of automation is changing almost overnight. Where robots were once, very recently, associated primarily with lost jobs, there's been a new spin in the industry to tout automation solutions as commonsense in a world where workers are risking infection when they show up at physical locations.


How the cloud fractures application delivery infrastructure ops

The traditional infrastructure team still operates ADCs and load balancers in the data center, while preferring the vendors they have worked with in the past. DevOps and CloudOps have taken control in the public cloud, choosing to use software and cloud provider services that are more integrated with their DevOps toolchains. This fractured operations model is problematic. Companies with divided Layer 4-7 operations are less likely to be successful with this infrastructure. EMA research participants also revealed why they feel a need to close this operational gap. First, 43% of enterprises said this situation has introduced security risks. In most enterprises, application delivery infrastructure is an important component of overall security architecture. Companies need to take a unified approach to network security. Research participants identified compliance problems (36%) and operational efficiency (36%) as the top secondary challenges associated with fractured operations. And 30% said platform problems -- such as issues with scale, performance, functionality or stability -- are a major challenge.


The enormous opportunity in fintech

Technology providers to specific areas of finance have created significant businesses. Across the insurance ecosystem, Guidewire, Applied Systems, and Vertafore capture $10 billion of value. Black Knight, the leading analytics provider to the mortgage industry, is an $11 billion business. Are you thinking about managing financial documents for your public company? You may turn to Broadridge, which makes a pretty penny in this business, boasting a $13 billion market cap. While these are massive markets, it is not easy to disrupt incumbents. A combination of regulatory hurdles, entrenched behavior, low risk-tolerance, and the benefits of larger balance sheets have kept upstarts at bay for decades. However, as venture capital supports the ecosystem, modern technology creeps into the sector (cloud, APIs), connectivity and data exchanges improve, and consumers grow tired of incumbents, the tide continues to shift. This shift and the challenge to the status quo by fintech upstarts will have lasting effects. Even when incumbents acquire their biggest disruptors, such as Visa’s acquisition of Plaid, innovations pioneered by those startups become integrated into the system and help move the industry forward.


Somehow, Microsoft is the best thing to happen to Chrome

What strange times we live in. Who’d have thought that I’d be writing an article on how Microsoft is the best thing to happen to Google Chrome? A few years ago the idea of Microsoft getting involved in an open source project would cause a mixture of laughter and dread. You know… Microsoft, the foe of open source who had a CEO that once said that Linux was “a cancer that attaches itself in an intellectual property sense to everything it touches.” The company that couldn’t make a decent web browser to save its life. But, believe it or not, I really do think that Microsoft’s involvement has made Chrome a much better browser. ... Basically, since dropping its opposition to open source, and not only embracing it, but putting its money where its mouth is, the thought of Microsoft being involved with an open source project is no longer the stuff of nightmares. It’s proved to be a valuable contributor to the open source community already. But how does this affect Google’s Chrome browser? Well, ever since Microsoft stopped using its own web engine, EdgeHTML, for its Edge web browser, and instead built a brand-new version that’s based on Chromium, it’s been contributing a steady stream of fixes and new features to Chromium – and those have not just been benefitting Edge, but Chrome as well.


IBM just changed the automation game. Hello Extreme Automation

The technology provides a low code, cloud-based authoring experience for the business user to create bot scripts with a desktop recorder, without the need of IT. These scripts are executed by digital robots to complete tasks. Digital robots can run on-demand by the end-user or by an automated scheduler. Arguably, WDG is on a par with Softomotive – acquired by Microsoft for considerably more money. What is clear is that these RPA firms are offering pretty much the same functionality for the basic scripting and recording.  WDG is focused heavily on quality customer service ops and is great at integrating with chatbots, digital associates and other AI tools. Pre-Covid, most RPA was focused on low-risk back-office processes, especially in finance. Now customers are desperate to automate the customer-facing and revenue-generating processes and need tools proven to work in those environments. No one has a huge advantage in the CX automation space, so this provides a greenfield opportunity for IBM. The WDG automation software sits under IBM Cognitive and Cloud giving it a broader playing field to compete with the likes of MSFT, Pega, Appian, and even ServiceNow. Arguably, this is the real play that excites IBM’s top brass.


The Importance of Domain Experience in Data Science

Restated — domain knowledge is the learned skill to communicate fluently in a group’s data dialect. Its component parts are: general business acumen + vertical knowledge + data lineage understanding. For example, a data scientist in people analytics requires a foundational knowledge of the business + human resources + the inner-workings of their company’s HR tools and processes which create the data they work with. Those processes and other inputs to the dataset are crucial. A data scientist can’t create meaningful insights before they understand what the data is saying today. Is it telling a story? Is it, or subsets of it, too polluted to use today? Are some data points proxies for or inputs to others? The more complex your business processes and associated data lineage, the longer your data dialect will take to learn. For digital native companies whose data collection is automated with intuitive dialects (i.e. a “click” is a “click”), domain knowledge can be developed much more quickly than for large, longstanding companies which have undergone transformations, acquisitions and/or divestitures. If you hire a data scientist, how long will it take them to learn your data dialect? And can you provide air cover for them to do so before applying pressure to produce “insights?”


Hiring developers: While coding is important, there are other things to consider

A recruiter can learn a lot about the candidate in that half hour, including any side projects they might be involved in or games they've written. These "are often a window into a developer's willingness to take initiative," Volodarsky said. Learning what a developer does in their spare time can also provide great insight into their personality, he said. "Hiring great coders is important, but you also want to collaborate with interesting people, too." When it comes to hiring freelance developers it's important that they understand both the code and the nuances of the business they're contracting for, and this will come through in that conversation over a falafel, or the like, he said. In terms of motivating factors, not surprisingly, an overwhelming 70% said they were looking for better compensation, while 58.5% said they want to work with new technologies, and 57% said they were curious about other opportunities. Close to 70% of respondents said they learn about a company during a job hunt by turning to reviews on third-party sites such as Glassdoor and Blind. However, a large number also said they learned from viewing company-sponsored media, such as blogs and company culture videos.


Is Singapore ready to govern a digital population?

Singapore over the past several years has invested significant resources towards becoming a digital economy, rolling out an ambitious smart nation roadmap, driving the adoption of emerging technologies, and overhauling its own ICT infrastructure. With the global pandemic now adding new impetus to digital transformation, the government has made a concerted effort to drive digital adoption deeper into the business community and local population. It established a new office to work alongside the business community and local population to push the "national digitalisation movement". Initiatives would include the deployment of 1,000 "digital ambassadors" to help stallholders and seniors go digital and setting up of 50 digital community hubs across the island to offer one-to-one assistance on digital skills. A new ministerial committee will also coordinate the country's digitalisation efforts and focus on priorities such as assisting people in learning new skills and galvanising small businesses to go digital. More funds and resources have been further directed to facilitate digital transformation initiatives.


AIOps tools expand as users warm slowly to autoremediation

AIOps has generated industry hype since 2017, as advances in machine learning algorithms prompted IT monitoring vendors to envision a new method of automation for their products. At the same time, complex microservices infrastructures became impossible to manage entirely by human hands alone. Since then, AIOps tools have grown more sophisticated, adding automated remediation features to event correlation and automated root cause analysis, and AIOps vendors that began in specialized areas have also broadened the workloads their tools can support. Most recently, those vendors include Epsagon, which emerged in 2018 with AI-supported distributed tracing for serverless environments and expanded in 2019 to include container and cloud workloads. It now offers AIOps features it calls Applied Observability, which automate menial incident resolution tasks in response to metrics and logs in addition to traces. Last month, Epsagon launched a partnership with Microsoft centered on Kubernetes environments after previously inking a deal with AWS focused on its Lambda serverless compute service.


How Microfrontends Can Help to Focus on Business Needs

The concept of building sites from small web applications integrated via hyperlinks is (still) very common. There have also been a lot of concepts of rendering pages from smaller, independent building blocks in the past, such as Java Portlets. Even if the term microfrontend nowadays is used to refer to modern JavaScript apps, there are multiple possible approaches. So, when I use it in this article I refer to an application that: is basically a JavaScript Rich Client (for example a SPA or a Web Component) that runs isolated within an arbitrary DOM node and is as small and performant as possible; does not install global libraries, fonts, or styles; does not assume anything about the site it is embedded in; especially it does not assume any existing paths, so all the base paths to assets and APIs must be configurable; has a well-defined interface consisting of the startup configuration and some runtime messages (events); should be instantiable; ideally inherits the shared styles from the site and ships only styles absolutely necessary to define its layout.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg