
Daily Tech Digest - December 30, 2025


Quote for the day:

“It is never too late to be what you might have been.” -- George Eliot


Cybersecurity Trends: What's in Store for Defenders in 2026?

For hackers of all stripes, a ready supply of easily procured, useful tools abounds. Numerous breaches trace to information-stealing malware, which grabs credentials from a system into a "log." Automated "clouds of logs" make it easy for info stealer subscribers to monetize their attacks. ... Clop, aka Cl0p, again stole data and held it for ransom. How many victims paid a ransom isn't known, although the group's repeated ability to pay for zero-days suggests it's making a tidy profit. Other cybercrime groups appear to have learned from Clop's successes, including The Com cybercrime collective spinoff lately calling itself Scattered Lapsus$ Hunters. One repeat target of that group has been third-party software that connects to the customer relationship management platform Salesforce, allowing them to steal OAuth tokens and gain access to Salesforce instances and customer data. ... Beyond the massive potential illicit revenue being earned by these teenagers, what's also notable is the sheer brutality of many of these attacks, such as data breaches involving children's nurseries including Kiddo, and the disruption of the British economy to the tune of $2.5 billion through a single attack against Jaguar Land Rover that shut down assembly lines and supply chains. ... Well-designed defenses help blunt many an attacker, or at least slow an intrusion. Enforcing least-privileged access to resources and multifactor authentication always helps, as do concrete security practices designed to block CEO fraud, ploys that trick help desks, and other forms of social engineering.


4 New Year’s resolutions for devops success

“Develop a growth mindset that AI models are not good or bad, but rather a new nondeterministic paradigm in software that can both create new issues and new opportunities,” says Matthew Makai, VP of developer relations at DigitalOcean. “It’s on devops engineers and teams to adapt to how software is created, deployed, and operated.” ... A good place to start is improving observability across APIs, applications, and automations. “Developers should adopt an AI-first, prevention-first mindset, using observability and AIops to move from reactive fixes to proactive detection and prevention of issues,” says Alok Uniyal, SVP and head of process consulting at Infosys. ... “Integrating accessibility into the devops pipeline should be a top resolution, with accessibility tests running alongside security and unit tests in CI as automated testing and AI coding tools mature,” says Navin Thadani, CEO of Evinced. “As AI accelerates development, failing to fix accessibility issues early will only cause teams to generate inaccessible code faster, making shift-left accessibility essential. Engineers should think hard about keeping accessibility in the loop, so the promise of AI-driven coding doesn’t leave inclusion behind.” ... For engineers ready to step up into leadership roles but concerned about taking on direct reports, consider mentoring others to build skills and confidence. “There is high-potential talent everywhere, so aside from learning technical skills, I would challenge devops engineers to also take the time to mentor a junior engineer in 2026,” says Austin Spires.


New framework simplifies the complex landscape of agentic AI

Agent adaptation involves modifying the foundation model that underlies the agentic system. This is done by updating the agent’s internal parameters or policies through methods like fine-tuning or reinforcement learning to better align with specific tasks. Tool adaptation, on the other hand, shifts the focus to the environment surrounding the agent. Instead of retraining the large, expensive foundation model, developers optimize the external tools such as search retrievers, memory modules, or sub-agents. ... If the agent struggles to use generic tools, don't retrain the main model. Instead, train a small, specialized sub-agent (like a searcher or memory manager) to filter and format data exactly how the main agent likes it. This is highly data-efficient and suitable for proprietary enterprise data and applications that are high-volume and cost-sensitive. Use A1 for specialization: If the agent fundamentally fails at technical tasks, you must rewire its understanding of the tool's "mechanics." A1 is best for creating specialists in verifiable domains like SQL or Python or your proprietary tools. For example, you can optimize a small model for your specific toolset and then use it as a T1 plugin for a generalist model. Reserve A2 (agent output signaled) as the "nuclear option": Only train a monolithic agent end-to-end if you need it to internalize complex strategy and self-correction. This is resource-intensive and rarely necessary for standard enterprise applications.
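
To make the tool-adaptation idea concrete, here is a minimal sketch of a small, trainable retriever sub-agent that filters and formats raw tool output before it reaches a frozen main model. The class, the function names, and the toy scorer are illustrative assumptions, not the framework's actual API.

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    source: str
    text: str
    score: float  # relevance estimate from the small sub-agent

class RetrieverSubAgent:
    """Small, cheap model adapted to the environment; the main agent stays frozen.
    `small_model` is a placeholder for whatever scorer/reranker you fine-tune."""
    def __init__(self, small_model, top_k=3):
        self.small_model = small_model
        self.top_k = top_k

    def __call__(self, query, raw_hits):
        # Score every raw hit with the small model, keep the best few.
        scored = [ToolResult(h["source"], h["text"], self.small_model(query, h["text"]))
                  for h in raw_hits]
        best = sorted(scored, key=lambda r: r.score, reverse=True)[:self.top_k]
        # Format the context exactly how the main agent "likes it": short, cited snippets.
        return "\n".join(f"[{r.source}] {r.text}" for r in best)

# Usage sketch: only the tiny scorer needs training on proprietary data.
toy_scorer = lambda q, t: sum(w in t.lower() for w in q.lower().split())
retrieve = RetrieverSubAgent(toy_scorer)
hits = [{"source": "kb/41", "text": "Refund policy: 30 days."},
        {"source": "kb/7", "text": "Office hours are 9-5."}]
print(retrieve("what is the refund policy", hits))
```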


Radio signals could give attackers a foothold inside air-gapped devices

For an attack to work, sensitivity needs to be predictable. Multiple copies of the same board model were tested using the same configurations and signal settings. Several sensitivity patterns appeared consistently across samples, meaning an attacker could characterize one device and apply those findings to another of the same model. They also measured stability over 24 hours to assess whether the effect persisted beyond short test windows. Most sensitive frequency regions remained consistent over time, with modest drift in some paths ... Once sensitive paths were identified, the team tested data reception. They used on-off keying, where the transmitter switches a carrier on for a one and off for a zero. This choice matched the observed behavior, which distinguishes between presence and absence of a signal. Under ideal synchronization, several paths achieved bit error rates below 1 percent when estimated received power reached about 10 milliwatts. One path stayed below 2 percent at roughly 1 milliwatt. Bandwidth tests showed that symbol rates up to 100 kilobits per second remained distinguishable, even as transitions blurred at higher rates. In a longer test, the researchers transmitted about 12,000 bits at 1 kilobit per second. At three meters, reception produced no errors. At 20 meters, the bit error rate reached about 6.2 percent. Errors appeared in bursts that standard error correction could address.
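
A rough simulation of the reception scheme described above: on-off keying with threshold detection over a noisy channel, measuring bit error rate over roughly 12,000 bits as in the longer test. The Gaussian noise model and the threshold value are illustrative assumptions, not the researchers' measured channel.

```python
import random

def ook_transmit(bits, on_power=1.0):
    """On-off keying: carrier on for a one, off for a zero."""
    return [on_power if b else 0.0 for b in bits]

def ook_receive(samples, threshold=0.5, noise_std=0.2):
    """Threshold detection after additive noise (illustrative channel model)."""
    return [1 if s + random.gauss(0.0, noise_std) > threshold else 0 for s in samples]

def bit_error_rate(sent, decoded):
    return sum(s != d for s, d in zip(sent, decoded)) / len(sent)

random.seed(42)
bits = [random.randint(0, 1) for _ in range(12_000)]  # ~12,000 bits, as in the long test
decoded = ook_receive(ook_transmit(bits))
print(f"BER: {bit_error_rate(bits, decoded):.4%}")
```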


Smart Companies Are Taking SaaS In-House with Agentic Development

The uncomfortable truth: when your critical business processes depend on an AI SaaS vendor’s survival, you’ve outsourced your competitive advantage to their cap table. ... But the deeper risk isn’t operational disruption — it’s strategic surrender. When you pipe your proprietary business context through external AI platforms, you’re training their models on your differentiation. You’re converting what should be permanent strategic assets into recurring operational expenses that drag down EBITDA. For companies evaluating AI SaaS alternatives, the real question is no longer whether to build or buy — but what parts of the AI stack must be owned to protect long‑term competitive advantage. ... “Who maintains these apps?” It’s the right question, with a surprising answer:
1. SaaS Maintenance Isn’t Free — Vendors deprecate APIs, change pricing, pivot features. Your team still scrambles to adapt. Plus, the security risk often comes from having an external third party connecting to internal data.
2. Agents Lower Maintenance Costs Dramatically — Updating deprecated libraries? Agents excel at this, especially with typed languages. The biggest hesitancy — knowledge loss when developers leave — evaporates when agents can explain the codebase to anyone.
3. You Control the Update Schedule — With owned infrastructure, you decide when to upgrade dependencies, refactor components, or add features. No vendor forcing breaking changes on their timeline.


6 cyber insurance gotchas security leaders must avoid

Before committing to a specific insurer, Lindsay recommends consulting an attorney with experience in cyber insurance contracts. “A policy is a legal document with complex definitions,” he notes. “An attorney can flag ambiguous terms, hidden carve-outs, or obligations that could create disputes at claim time,” Lindsay says. ... It’s hardly surprising, but important to remember, that the language contained in cybersecurity policies generally favors the insurer, not the insured. “Businesses often misinterpret the language from their perspective and overlook the risks that the very language of the policy creates,” Polsky warns. ... You may believe your policy will cover all cyberattack losses, yet a look at the fine print may reveal that it’s riddled with exclusions and warranties that can’t be realistically met, particularly in areas such as social engineering, ransomware, and business interruption. ... Many enterprises believe they’re fully secure, yet when they file a claim the insurer points to the fine print about security measures you didn’t know were required, Mayo says. “Now you’re stuck with cleanup costs, legal fees, and potential lawsuits — all without support from your insurance provider.” ... The retroactive date clause can be the biggest cyber insurance trap, warns Paul Pioselli, founder and CEO of cybersecurity services firm Solace. ... Perhaps the biggest mistake an insurance seeker can make is failing to understand the difference between first-party coverage and third-party coverage, and therefore failing to acquire a policy that includes both, says Dylan Tate.


7 major IT disasters of 2025

In July, US cleaning product vendor Clorox filed a $380 million lawsuit against Cognizant, accusing the IT services provider’s helpdesk staff of handing over network passwords to cybercriminals who called and asked for them. ... Zimmer Biomet, a medical device company, filed a $172 million lawsuit against Deloitte in September, accusing the IT consulting company of failing to deliver promised results in a large-scale SAP S/4HANA deployment. ... In September, a massive fire at the National Information Resources Service (NIRS) government data center in South Korea resulted in the loss of 858TB of government data stored there. ... Multiple Google cloud services, including Gmail, Docs, Drive, Maps, and Gemini, were taken down during a massive outage in June. The outage was triggered by an earlier policy change to Google Service Control, a control plane service that provides functionality for managed services, with a null-pointer crash loop breaking APIs across several products. ... In late October, Amazon Web Services’ US-EAST-1 region was hit with a significant outage, lasting about three hours during early morning hours. The problem was related to DNS resolution of the DynamoDB API endpoint in the region, causing increased error rates, latency, and new instance launch failures for multiple AWS services. ... In late July, services in Microsoft’s Azure East US region were disrupted, with customers experiencing allocation failures when trying to create or update virtual machines. The problem? A lack of capacity, with a surge in demand outstripping Microsoft’s computing resources.


Stop Guessing, Start Improving: Using DORA Metrics and Process Behavior Charts

The DORA framework consists of several key metrics. Among them, Change Lead Time (CLT) shows how quickly a team can deliver change. Deployment Frequency (DF) shows what the team actually delivers. While important, DF is often more volatile, influenced by team size, vacations, and the type of work being done. Finally, the instability metrics and reliability SLOs serve as a counterbalance. ... Beyond spotting special causes, PBCs are also useful for detecting shifts, moments when the entire system moves to a new performance level. In the commute example above, these shifts appear as clear drops in the average commute time whenever a real improvement is introduced, such as buying a bike or finding a shorter route. Technically, a shift occurs when several consecutive points fall above or below the previous mean, signaling that the process has fundamentally changed. ... Sustainable improvement is rarely linear. It depends on a series of strategic bets whose effects emerge over time. Some succeed, others fail, and external factors, from tooling changes to team turnover, often introduce temporary setbacks. ... According to DORA research, these metrics have a predictive relationship with broader outcomes such as organizational performance and team well-being. In other words, teams that score higher on DORA metrics are statistically more likely to achieve better business results and report higher satisfaction.
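
The shift-detection rule lends itself to a few lines of code. Below is a minimal sketch of an XmR-style process behavior chart over change lead time: natural process limits from the average moving range (2.66 is the standard XmR constant), plus a run rule that flags a shift when eight consecutive points fall on one side of the mean. The run length of eight and the sample data are illustrative choices; the article says only "several consecutive points."

```python
def pbc_limits(values):
    """Natural process limits for an XmR process behavior chart."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def detect_shift(values, mean, run_length=8):
    """Return the index where `run_length` consecutive points sit on one side of the mean."""
    run, side = 0, 0
    for i, v in enumerate(values):
        s = 1 if v > mean else -1 if v < mean else 0
        run = run + 1 if s == side and s != 0 else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            return i
    return None

# Change lead time in days; an improvement lands halfway through, so the
# early points form a run above the overall mean and the rule fires.
clt = [9, 11, 10, 12, 9, 10, 11, 10, 6, 5, 7, 6, 5, 6, 5, 6, 5, 7]
mean, lower, upper = pbc_limits(clt)
print(f"mean={mean:.1f}, limits=({lower:.1f}, {upper:.1f}), first shift signal at index {detect_shift(clt, mean)}")
```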


5 Threats That Defined Security in 2025

Salt Typhoon is a Chinese state-sponsored threat actor best known in recent memory for targeting telecom giants — including Verizon, AT&T, Lumen Technologies, and multiple others — in a campaign discovered last fall that targeted the systems used by police for court-authorized wiretapping. The group, also known as Operator Panda, uses sophisticated techniques to conduct espionage against targets and pre-position itself for longer-term attacks. ... CISA layoffs, indirectly, mark a threat of a different kind. At the beginning of the year, the Trump administration cut all advisory committee members within the Cyber Safety Review Board (CSRB), a group run by public and private sector experts to research and make judgments about large issues of the moment. As the CSRB was effectively shuttered, it was working on a report about Salt Typhoon. ... React2Shell describes CVE-2025-55182, a vulnerability disclosed early this month affecting the React Server Components (RSC) open source protocol. Caused by unsafe deserialization, the vulnerability was considered easily exploitable and highly dangerous, earning it a maximum CVSS score of 10. Even worse, React is fairly ubiquitous, and at the time of disclosure it was thought that a third of cloud providers were vulnerable. ... In September, a self-replicating malware emerged known as Shai-Hulud. It's an infostealer that infects open source software components; when a user downloads a package infected by the worm, Shai-Hulud infects other packages maintained by the user and publishes poisoned versions, automatically and without much direct attacker input.


How data-led intelligence can help apparel manufacturers and retailers adapt faster to changing consumer behaviour

AI is already helping retail businesses to understand the complex buying patterns of India’s diverse population. To predict demand, big box chains such as Reliance Retail and e-commerce leaders like Flipkart use machine learning algorithms to analyse historical sales, search patterns and even social media conversations. ... With data-led intelligence studying real-time demand signals, manufacturers can adjust their lines much sooner. If data shows a rising preference for electric scooters in certain cities, for instance, factories can scale up output before the trend peaks. And when interest in a product starts dipping, production can be slowed to prevent excess stock. ... One of the strongest outcomes of the AI wave is its ability to bring consumer demand and industrial supply onto the same page. In the past, customer preferences often evolved faster than factories could react, creating gaps between what buyers wanted and what stores stocked. AI has made this far easier to manage. Manufacturers and retailers now share richer data and insights across the supply chain, allowing production teams to plan with far better clarity. This also enhances supply chain transparency, a growing priority for global buyers seeking traceability. ... If data intelligence tools notice a sharp rise in conversations around eco-friendly packaging or sustainable clothing, retailers can adjust their marketing and stock in advance, while manufacturers source greener materials and redesign processes to match the growing interest.

Daily Tech Digest - May 08, 2022

Your mechanical keyboard isn't just annoying, it's also a security risk

If this has set you on edge then I have both good and bad news for you. The good news is that while this is fairly creepy, it's unlikely that hackers will be able to break into your private space and place a microphone in close enough proximity to your keyboard without you noticing. The bad news is that there are plenty of other ways that your keyboard could be giving away your private information. Keystroke capturing dongles exist that can be plugged into a keyboard’s USB cable, and wireless keyboards can be exploited using hardware such as KeySweeper, a device that can record keyboards using the 2.4GHz frequency when placed in the same room. There are even complex systems that use lasers to detect vibrations or fluctuations in powerlines to record what's being written on a nearby keyboard. Still, if you're a fan of mechanical keyboards then don't let any of this deter you, especially if you use one at home rather than in a public office environment. It's highly unlikely that you need to take extreme measures in your own home, and just about everything comes with a security risk these days.


Relational knowledge graphs will transform business

"There have been many generations of algorithms built that have all been created around the idea of a binary one," said Muglia. "They have two tables with the key to join the two together, and then you get a result set, and the query optimizer takes and optimizes the order of those joins — binary join, binary join, binary join!" The recursive problems such as Fred Jones's permissions, he said, "cannot be efficiently solved with those algorithms, period." The right structure for business relationships, as distinct from data relationships, said Muglia, is a knowledge graph. "What is a knowledge graph?" asked Muglia, rhetorically. He offered his own definition for what can be a sometimes mysterious concept. "A knowledge graph is a database that models business concepts, the relationships between them, and the associated business rules and constraints." Muglia, now a board member for startup Relational AI, told the audience that the future of business applications will be knowledge graphs built on top of data analytics, but with the twist that they will use the relational calculus going all the way back to relational database pioneer E.F. Codd.


We Need to Talk about the Software Engineer Grind Culture

SWE culture can be very toxic. Generally, I found that people who get rewarded within software engineering are those who sacrifice their personal time for their project/job. We reward people who code an entire project in 24 hours (I mean, just think about the popularity of hackathons). I remember watching a TikTok from a tech creator who said that US software engineers are paid so much not because of what they do during work hours, but because of all of the extra work they do outside of it. Ask yourself: are you paid enough to sacrifice your life outside of work? So many of us are conditioned to this rat race. I realized that this grind has caused me to lose out on any hobbies outside of coding. There are so many software engineers who are also tech creators on the side. Whether they run a Twitch channel dedicated to coding, make YouTube videos about coding, or create tech content on TikTok, it usually has something to do with this specialization in software engineering. The reason these channels are so successful is because we, as software engineers, have bought into this narrative.


Managing Tech Debt in a Microservice Architecture

This company has a lot of dedicated and smart engineers, which most probably explains how they were able to come up with what they call the technology capability plan. I find the TCP to be a truly innovative community approach to managing tech debt. I've not seen anything like it anywhere else. That's why I'm excited about it and want to share what we have learned with you. Here is the stated purpose of the TCP. It is used by and for engineering to signal intent to both engineering and product, by collecting, organizing, and communicating the ever-changing requirements in the technology landscape for the purposes of architecting for longevity and adaptivity. In the next four slides of this presentation, I will show you how to foster the engineering communities that create the TCP. You will learn how to motivate those communities to craft domain specific plans for paying down tech debt. We will cover the specific format and purpose of these plans. We will then focus on how to calculate the risk for each area of tech debt, and use that for setting plan priorities. 


Shedding Light On Toil: Ways Engineers Can Reduce Toil

More proactive monitoring is another way to reduce toil, according to Englund and Davis. “Responding to a crash loop is responding too late,” added Davis. Instead, he advocated that SREs look toward leading indicators that suggest the potential for failure so that teams can make adjustments well before anything drastic occurs. If SLIs like error rate and latency are getting bad, you must take reactive measures to fix them, causing more toil. Instead, proactive monitoring is best to see the cresting wave before the flood. Leading indicators could arise from following things like data queue operations connected to servers or the saturation of a particular resource. “If you can figure out when you’re about to fail, you can be prepared to adapt,” said Davis. One major caveat of standardization is that you’re inevitably going to encounter edge cases that require flexibility. And when an outage or issue does arise, the remediation process is often very unique from case to case. As a result, not all investment in standardization pays off. Alternatively, teams that know how to improvise together have proven to be better equipped for unforeseen incidents.
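
One way to act on a leading indicator, sketched below under simple assumptions: fit a slope to recent utilization samples of a resource (queue depth here) and extrapolate when it hits capacity, so the team is alerted before the SLIs degrade. The one-sample-per-minute cadence, the capacity, and the alert horizon are invented for illustration.

```python
def minutes_until_saturation(samples, capacity):
    """Least-squares slope over recent samples (one per minute), extrapolated
    to when the resource reaches capacity. Returns None if not trending up."""
    n = len(samples)
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    denom = sum((x - x_mean) ** 2 for x in range(n))
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples)) / denom
    if slope <= 0:
        return None
    return (capacity - samples[-1]) / slope

# Queue depth growing roughly 50 items/minute toward a 10,000-item limit.
depths = [7000, 7050, 7110, 7155, 7210, 7260]
eta = minutes_until_saturation(depths, capacity=10_000)
if eta is not None and eta < 120:  # alert horizon: two hours
    print(f"Leading indicator: queue saturates in ~{eta:.0f} minutes")
```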


Are your SLOs realistic? How to analyze your risks like an SRE

You can reduce the impact on your users by reducing the percentage of infrastructure, users, or requests affected (e.g., throttling part of the requests vs. all of them). In order to reduce the blast radius of outages, avoid global changes and adopt advanced deployment strategies that allow you to gradually deploy changes. Consider progressive and canary rollouts over the course of hours, days, or weeks, which allow you to reduce the risk and to identify an issue before all your users are affected. Further, having robust Continuous Integration and Continuous Delivery (CI/CD) pipelines allows you to deploy and roll back with confidence and reduce customer impact. Creating an integrated process of code review and testing will help you find the issues early on before users are affected. Improving the time to detect means that you catch outages faster. As a reminder, having an estimated TTD expresses how long until a human being is informed of the problem.
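
The terms above combine into simple arithmetic for judging whether an SLO is realistic. A minimal sketch with invented numbers: each risk's expected annual damage is (time to detect + time to repair) × blast radius × expected incidents per year, compared against the SLO's error budget.

```python
def annual_bad_minutes(ttd_min, ttr_min, impact_fraction, incidents_per_year):
    """Expected user-impacting downtime one risk contributes per year."""
    return (ttd_min + ttr_min) * impact_fraction * incidents_per_year

def error_budget_minutes(slo=0.999):
    """Allowed bad minutes per year for an availability SLO."""
    return (1 - slo) * 365 * 24 * 60

# Illustrative risk: bad rollout detected in 15 min, rolled back in 30 min,
# canary limits impact to 10% of users, expected four times a year.
risk = annual_bad_minutes(ttd_min=15, ttr_min=30, impact_fraction=0.10, incidents_per_year=4)
budget = error_budget_minutes(0.999)
print(f"This risk alone consumes {risk:.0f} of the {budget:.0f} budget minutes per year")
```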


5 Ways to Drive Mature SRE Practices

Project failure — and the way it’s regarded within the organization — is often as important as success. To create maximum value, SREs must be free to experiment and work on strategic projects that push the boundaries, understanding they will fail as often as they succeed. However, according to the “State of SRE Report,” only a quarter of organizations accept the “fail fast, fail often” mantra. To mature their practice, enterprises must free SREs from the traditional cost constraints placed upon IT and encourage them to challenge accepted norms. They should be setting new benchmarks for innovative design and engineering practices, not be bogged down in the minutiae of development cycles. Running hackathons and bonus schemes focused on reliability improvements is a great way to uplevel SREs and encourage an organizational culture of learning and experimentation, where failure is valued as much as success. Measurement is critical to developing any IT program, and SRE is no exception. To truly understand where performance gaps are and optimize critical user journeys, SREs need to go beyond performance monitoring data.


The Future of Data Management: It’s Already Here

Data fabric can automatically detect data abnormalities and take appropriate steps to correct them, reducing losses and improving regulatory compliance. A data fabric enables organizations to define governance norms and controls, improve risk management, and improve monitoring—something of increasing importance now that legal standards for data governance and risk management have become more demanding, making compliance and governance vital. It also enhances cost savings through the avoidance of potential regulatory penalties. A data fabric represents a fundamentally different way of connecting data. Those who have adopted one now understand that they can do many things differently, providing an excellent route for enterprises to reconsider a host of issues. Because data fabrics span the entire range of data work, they address the needs of all constituents: developers, business analysts, data scientists, and IT team members collectively. As a result, POCs will continue to grow across departments and divisions.


Why Data Catalogs Are the Standard for Data Intelligence

Gartner positions a data catalog as the foundation “to access and represent all metadata types in a connected knowledge graph.” To illustrate, I’ll share a personal experience about why I think a data catalog is crucial to data intelligence. Some years ago, when I worked at a large global technology company, my manager said, “I want you to figure out what metrics we should measure and tell us if our product is making our customers successful. We don’t have the data or analysis today.” I was surprised. How could that be? How can a successful enterprise not have the data model in place to measure a market-leading product? Have they based their decisions on gut instinct? As part of my work, I had to create some hypotheses, gather data, analyze it, and create a recommendation. To start, I had to find an expert who had a significant amount of tribal knowledge and could explain what data existed, where it was located, what it meant, how I should use it, and what pitfalls I might encounter when using it. Next, I had to get the data from the data warehouse and write a lot of SQL queries, all while finding the data science people to get their help.


An enterprise architecture approach to ESG

Often, and especially when looked at through a holistic enterprise architecture approach, achieving or reporting on certain ESG goals (or seizing on innovative new opportunities that ESG brings about) will not be possible through isolated tech changes, but will in fact require a more holistic digital transformation. An EA-supported ESG assessment will give an accurate view of the costs and benefits of an organisation's overall IT portfolio. Architecture lenses will then help to make the decisions necessary for ESG-related digital investment and/or transformation. For example, the high energy footprint of business IT systems is becoming an increasing focus of ESG concern. As a consequence, organisations are feeling significant pressure to move to ‘clean IT’, optimising the trade-off between energy consumption and computational performance, and incorporating algorithmic and computational efficiencies in IT solutions and designs. Meeting ESG future states will likely require digitalisation and emerging technologies such as IoT, digital twins, big data, and AI.



Quote for the day:

"At the heart of great leadership is a curious mind, heart, and spirit." -- Chip Conley

Daily Tech Digest - October 18, 2018

How Financial Institutions Can Put Risk Management Back in the Driver’s Seat

The benefits of putting the business clearly in charge of risk management — and holding it accountable — are significant. First, holding the first line accountable for risk management aligns the interests of internal revenue generators with those of the overall firm. When first-line salespeople do their own risk management “driving,” they gain an understanding of their firm’s position and reputation that they otherwise might not get. They are thus less likely to try to on-board a questionable client or put together loan proposals that may be rejected. In general, the business needs to be clearly accountable for managing the risks it takes in pursuit of its objectives. A system of first-line front-seat drivers also encourages people to keep their eye out for risks wherever they pop up, rather than relying on the oversight specialist — the backseat driver — to point them out. This improves performance by allowing the business to spot some risks sooner, manage them more nimbly, and react more quickly when things do go wrong.



Too many business executives view the analytics transformation too narrowly: they tend to view it as centering on tools or being driven by the hiring of "big data" analysts or machine learning expertise. Almost without exception, every successful analytics transformation that I've seen or experienced has started at the top - this transformation is as much, perhaps even more, driven by cultural change as it is by the hiring of new resources and technical expertise. Even today, however, too many business leaders and executives tend to be intimidated by analytics and math. My guidance to you (and you know who you are) is to get educated - fast. That does not mean that you have to get the equivalent of a master's degree in operations research or machine learning. It does mean seeking out an expert who can frame these capabilities in the language and vernacular of a senior executive. This is the most perilous part of the journey because too many senior executives delegate this transformation to lower levels of the organization where it gets lost in a sea of other priorities.



The Future of the Cloud Depends on Magnetic Tape


Although the century-old technology has disappeared from most people’s daily view, magnetic tape lives on as the preferred medium for safely archiving critical cloud data in case, say, a software bug deletes thousands of Gmail messages, or a natural disaster wipes out some hard drives. The world’s electronic financial, health, and scientific records, collected on state-of-the-art cloud servers belonging to Amazon.com, Microsoft, Google, and others, are also typically recorded on tape around the same time they are created. Usually the companies keep one copy of each tape on-site, in a massive vault, and send a second copy to somebody like Iron Mountain. Unfortunately for the big tech companies, the number of tape manufacturers has shrunk over the past three years from six to just two—Sony Corp. and Fujifilm Holdings Corp.—and each seems to think that’s still one too many. The Japanese companies have said the tape business is a mere rounding error as far as they’re concerned, but each has spent millions of dollars arguing before the U.S. International Trade Commission to try to ban the other from importing tapes to America.


Solving the cloud infrastructure misconfiguration problem

The threats to cloud infrastructure are automated, so automated remediation is a requirement to effectively manage misconfiguration risk. His advice to CISOs is to set up a team that includes developers who understand cloud APIs and can automate every repetitive aspect of cloud security, starting with cloud configuration. “In order to be effective, the CISO needs to view their security team as an internal tool vendor in the cloud ecosystem. Development teams need support from security to move quickly, but also require good guard rails and feedback for how to do cloud securely,” he opines. “This security automation team led by the CISO needs to work closely with development teams to establish known-good configuration baselines using a whitelist approach that conforms with compliance and security policy. Once you have a known-good baseline, you can automate the remediation process for misconfiguration without running the risk of false positives leading to bad changes that can cause system downtime events.”
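
The "known-good baseline with a whitelist approach" can be sketched in a few lines: diff each live resource's configuration against an approved baseline and remediate only the fields that drift. The resource type, field names, and print-instead-of-API-call are illustrative assumptions, not any cloud provider's real interface.

```python
# Known-good baseline per resource type (field names invented for illustration).
BASELINE = {
    "storage_bucket": {"public_access": False, "encryption": "AES256", "versioning": True},
}

def find_drift(resource_type, live_config):
    """Return {field: (live_value, baseline_value)} for every off-baseline field."""
    baseline = BASELINE[resource_type]
    return {k: (live_config.get(k), v) for k, v in baseline.items()
            if live_config.get(k) != v}

def remediate(resource_id, drift):
    for field, (live, wanted) in drift.items():
        # A real pipeline would call the cloud API here; we just log the action.
        print(f"{resource_id}: reset {field} from {live!r} to baseline {wanted!r}")

live = {"public_access": True, "encryption": "AES256", "versioning": False}
remediate("logs-bucket", find_drift("storage_bucket", live))
```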


Quantum Computing: Why You Should Pay Attention


As databases continue to grow in size, this improvement can make it more feasible to handle the large volumes of data expected to come online in the coming years and decades as we reach physical limits in storage device latencies. Another practical advantage lies in our understanding of the world. Simulating quantum effects is notoriously difficult using the computers we rely on today, as the very fundamentals of quantum mechanics are vastly at odds with today’s devices. Using quantum computers, simulating these effects will be far simpler, allowing us to better unravel the mysteries of quantum mechanics. Even when quantum computing becomes common, it’s difficult to envision it completely replacing traditional computer devices. The types of applications at which quantum computers excel don’t seem to have much practical use for typical computer users. Furthermore, it will take some time for quantum computers to become smaller and more affordable, and there may be barriers preventing their widespread use.


Authentication Bypass in libSSH Leaves Servers Vulnerable

“Careful reading of code for the affected libSSH library indicated that it was possible to bypass authentication by presenting to the server an SSH2_MSG_USERAUTH_SUCCESS message in place of the SSH2_MSG_USERAUTH_REQUEST message which the server would expect to initiate authentication. The SSH2_MSG_USERAUTH_SUCCESS handler is intended only for communication from the server to the client,” an advisory by Peter Winter-Smith of NCC Group, who discovered the bug, says. In other words, an attacker can connect to any vulnerable server, without authentication, just by sending one message to the server. ... “Not all libSSH servers will necessarily be vulnerable to the authentication bypass; since the authentication bypass sets the internal libSSH state machine to authenticated without ever giving any registered authentication callbacks an opportunity to execute, servers developed using libSSH which maintain additional custom session state may fail to function correctly if a user is authenticated without this state being created,” the advisory says.
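
To make the mechanics concrete, here is a toy state machine illustrating the class of flaw the advisory describes (an illustration of the bug pattern in CVE-2018-10933, not libSSH's actual code): the server's dispatch table accepts a message type that only a server should ever send, and that handler flips the session to authenticated.

```python
# SSH message numbers from RFC 4252.
MSG_USERAUTH_REQUEST = 50
MSG_USERAUTH_SUCCESS = 52  # meant to flow server -> client only

class Session:
    def __init__(self):
        self.authenticated = False

def check_credentials(payload):
    return False  # stand-in: real credential verification would go here

def handle_userauth_request(session, payload):
    session.authenticated = check_credentials(payload)  # the intended path

def handle_userauth_success(session, payload):
    session.authenticated = True  # client-side logic, wrongly reachable on the server

DISPATCH = {
    MSG_USERAUTH_REQUEST: handle_userauth_request,
    MSG_USERAUTH_SUCCESS: handle_userauth_success,  # the flaw: registered server-side
}

session = Session()
DISPATCH[MSG_USERAUTH_SUCCESS](session, b"")  # attacker sends one message, skips auth
print(session.authenticated)  # True
```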


Learn Why Doctors Look To Data To Increase Patient Engagement


We’re positively swimming in data. But all that noise stands a good chance of confusing or distracting patients from their ultimate goal of ongoing good health if doctors and patients don’t come to the table together with a plan and a common understanding of which data points are meaningful in context and which are not.  There’s no doubt anymore: Big data is going to revolutionize the way we administer health care throughout the world and help us achieve financial savings. But as doctors look to leverage modern tools for interacting with and sharing patient health data, there are several factors to remember and several key advantages worth checking out. Here’s a rundown. Regrettably, we still lack a cure for many chronic diseases. Therefore, doctors and their patients must instead “manage” these conditions. It’s possible to live a full and active life while undergoing treatment for severe diseases and conditions, but only with the right levels of vigilance and engagement. Patients with chronic illnesses must maintain their motivation, their attention to treatment and medication schedules and their general knowledgeability about their condition.


Wärtsilä Opens World's First International Maritime Cyber Centre

“Cyber is such a critical topic to all players in marine. Taking stewardship in something as important as this, shows that Wärtsilä is committed to transform and digitalize the marine industry. This is the next step in our Smart Marine vision and supports our Oceanic Awakening and Sea20 initiatives,” says Marco Ryan, Chief Digital Officer at Wärtsilä. “There are three main drivers for the maritime industry to collaborate in improving our cyber resiliency: the vast attack surface that the maritime industry offers to cyber criminals; the inclusion of maritime into the critical national infrastructure of nation states and the pending cyber security regulation by the International Maritime Organisation in 2021,” says Mark Milford, Vice President, Cyber Security at Wärtsilä. The MCERT is an international cyber intelligence and incident support platform enhancing cyber resilience for the entire maritime ecosystem. It provides international intelligence feeds, advice and support, including real-time assistance to members on cyber attacks and incidents, and a Cyber Security Reporting Portal (CSRP) for its members.


Arm’s Neoverse will be the infrastructure for a trillion intelligent devices


Arm’s Neoverse intellectual property will take advantage of high-end manufacturing equipment in chip factories. The Ares platform will debut in 2019 with 30 percent per-generation performance improvements, Henry said. The designs will be flexible for customer purposes, and security will be a crucial part of the platform, he added. Ares will be built on seven-nanometer circuitry in the newest chip factories. Follow-up chips include Zeus at seven nanometers and Poseidon at five nanometers. The Arm Neoverse will include advanced processor designs as well as solutions and support for hardware, software, tools, and services. The company announced the platform at the Arm TechCon event in San Jose, California. “Arm has been more successful in infrastructure than many knew. This makes sense as many networking and storage systems have Arm-based chips inside, albeit with smaller cores,” said Patrick Moorhead, analyst at Moor Insights & Strategy, in an email.


What Innovative CEOs and Leaders Need to Know about AI

The authors view AI as “performing tasks, not entire jobs.” Out of the 152 AI projects, 71 were in the automation of digital and physical tasks, 57 were using algorithms to identify patterns for business intelligence and analytics, and 24 were for engaging employees and customers through machine learning, intelligent agents, and chatbots. In the Harvard Business Review article, a 2017 Deloitte survey of 250 executives familiar with their companies’ AI initiatives revealed that 51 percent of respondents said the primary goal was to improve existing products, while 47 percent identified integrating AI with existing processes and systems as a major obstacle. ... Early adopters of AI in the enterprise are reporting benefits — 83 percent indicated their companies have already achieved “moderate (53 percent) or substantial (30 percent) economic benefits.” 58 percent of respondents are using in-house resources versus outside expertise to implement AI, and 58 percent are using AI software from vendors.



Quote for the day:


"Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy." -- Norman Schwarzkopf


Daily Tech Digest - September 28, 2017

Professor Harish Bhaskaran of Oxford, who led the team, said “The development of computers that work more like the human brain has been a holy grail of scientists for decades. Via a network of neurons and synapses the brain can process and store vast amounts of information simultaneously, using only a few tens of watts of power. Conventional computers can’t come close to this sort of performance.” Daniel C. Wright, a co-author from the Exeter team, added that “Electronic computers are relatively slow, and the faster we make them the more they consume. Conventional computers are also pretty ‘dumb,’ with none of the in-built learning and parallel processing capabilities of the human brain. We tackle both of these issues here — not only by developing new brain-like computer architectures, but also by working in the optical domain to leverage the huge speed and power advantages of the upcoming silicon photonics revolution.”


Before you deploy OpenStack, address cost, hybrid cloud issues

Training can become an indirect OpenStack cost. IT and developer staff may not have the requisite skill sets needed to tackle an OpenStack deployment. You may need to find more OpenStack-savvy staff to handle the job, spend the money to train up existing staff as Certified OpenStack Administrators, hire consultants to jump-start the work or some combination of these tactics. Consider the implications of OpenStack support. Organizations can certainly adopt a canned OpenStack distribution and associated support from vendors like Red Hat or Rackspace. As open source software acquired directly, however, there is no official support. If you choose to deploy OpenStack, assemble a suite of support resources to address inevitable questions or to resolve problems. Some resources are free, while other resources will incur added costs.


To combat phishing, you must change your approach

The threat surface is growing, and cybercriminals are becoming more sophisticated. They’re utilizing threat tactics that have made it increasingly difficult for organizations to protect themselves at scale. Cyber criminals are putting pressure on businesses by increasing the volume of these kinds of targeted attacks, dramatically outpacing even the world’s largest security teams’ ability to keep up. Visibility is sadly lacking within most of today’s organizations, and it’s unrealistic for security teams to secure something they can’t see. There’s no tool or widget that can totally fix this and make everything safe. But we can get to a point where we have the ability to construct a security program that reduces risk in a demonstrable way. We can establish metrics for where your risk profile is today.


Fintech’s future is in the back end

Fear that their money would ultimately be spent on on-premise, and therefore nonscalable, technology has been another reason investors have shied away from the opportunity. This fear arises from the tendency of institutions to want to keep a new technology “in the institution” because of security concerns. However, technology has matured enough to meet the reasonably strict security requirements banks impose on partners and vendors. Just six years ago, only 64% of global financial firms had adopted a cloud application, according to research from Temenos. But now, security has dramatically improved in cloud applications and banks are willing to adopt the technology at scale. This is evidenced in both cloud solution adoption and also the industry’s growing willingness to embrace an open banking framework.


WannaCry an example of pseudo-ransomware, says McAfee

WannaCry may have been a proof of concept, but the true purpose, he said, was to cause disruption, which is consistent with what researchers are learning when going undercover as ransomware victims on ransomware support forums. “When one of our researchers asked why a particular ransom was so low, the ransomware support representative told her that those operating the ransomware had already been paid by someone to create and run the ransomware campaign to disrupt a competitor’s business,” said Samani. “The game has changed. The reality is that any organisation can hire someone to disrupt a competitor’s business operations for less than the price of a cup of coffee.” In the face of this reality, Samani said the security industry and society as a whole have to “draw a line in the sand”.


The Digital Intelligence Of The World's Leading Asset Managers 2017

Where once the asset management sector was a digital desert, websites and social media channels abound. Whilst this represents genuine progress, the content and functionality within them leaves a lot to be desired in most cases. Quality search functionality is hard to find, websites resemble glorified CVs and blogs read like technical manuals. As for thought leadership, well there’s little thought and no leadership. Social media, especially Twitter and Linkedin, are swamped with relentless HR tweets and duplicate updates. It’s clear that asset managers are missing an opportunity to create content that resonates with FAIs and can build lasting two-way relationships. Over the following pages we present our findings in detail and take a closer look at the digital successes and failures within the world’s leading asset managers.


Heads in the cloud: banks inch closer to cloud take-up

On the one hand, cloud providers – such as the leader of the pack, Amazon Web Services – are likely to have security processes and technology that are at least as advanced as those of their banking clients, thanks to their technical expertise and economies of scale. On the other hand, providers can pass on a bank’s data or system management to yet another contractor, increasing the security risks present in traditional outsourcing. The EU’s General Data Protection Regulation, coming into force next year, will up the ante on data security. The new rules require, among other things, that bank customers are able to request that the personal data held on them is deleted. One practical outcome, say lawyers, is that banks will have to clarify to cloud providers exactly how they should handle ...


Inside the fight for the soul of Infosys


Murthy criticized Sikka's pay and his use of private jets, and claimed that corporate governance standards had eroded during his tenure. Saying he could no longer run the company amid such criticism from a company founder, Sikka resigned as chief executive on Aug. 18 and left the board six days later. Three other directors followed him out the door, including the former chairman, R. Seshasayee. Murthy's criticisms haven't let up since Sikka's resignation. Speaking to shareholders on Aug. 29, he detailed his "concerns as a shareholder" over how the company's board members approved a severance package worth roughly 170 million rupees ($2.65 million) for former Chief Financial Officer Rajiv Bansal, who left the company in October 2015.


Should CISOs join CEOs in the C-suite?

A working partnership between the CIO and the CISO is clearly a successful formula, regardless of who reports to whom. “CISOs should report to the CEO with further exposure and responsibility to the board of directors,” says Alp Hug, founder and COO at Zenedge, a DDoS and malware protection vendor. “The time has come for boardrooms to consider cybersecurity a key requirement of every organization's core infrastructure along with a financial system, HRMS, CRM, etc., necessary to ensure the livelihood and continuity of the business.” If a board of directors says defending their organization against cyber crime and cyber warfare is a top priority, then they’ll demonstrate it by inviting their CISO into the boardroom. “Of course CISOs and equivalents will say they should report to the CEO,” says John Daniels.


The ins and outs of NoSQL data modelling

Data modelling is critical to understanding data, its interrelationships, and its rules. A data model is not just documentation, because it can be forward-engineered into a physical database. In short, data modelling solves one of the biggest challenges when adopting NoSQL technology: harnessing the power and flexibility of dynamic schemas without falling into the traps that a lack of design structure can create for teams. It eases the onboarding of NoSQL databases and legitimises their adoption in the enterprise roadmap, corporate IT architecture, and organisational data governance requirements. More specifically, it allows us to define and marry all the various contexts, ontologies, taxonomies, relationships, graphs, and models into one overarching data model.
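
What "forward-engineering" a model can look like in practice, as a minimal sketch: a declared document model enforced against a dynamic-schema store at write time. The entity, field names, and validation rules are invented for illustration; real document databases offer their own schema-validation hooks.

```python
# A declared model for a document collection: field -> (expected type, required?).
CUSTOMER_MODEL = {
    "customer_id": (str, True),
    "name":        (str, True),
    "email":       (str, True),
    "loyalty_pts": (int, False),
}

def validate(doc, model):
    """Check a document against the model before it is written."""
    errors = []
    for field, (ftype, required) in model.items():
        if field not in doc:
            if required:
                errors.append(f"missing required field: {field}")
        elif not isinstance(doc[field], ftype):
            errors.append(f"{field}: expected {ftype.__name__}, got {type(doc[field]).__name__}")
    return errors

doc = {"customer_id": "C-1001", "name": "Ada", "loyalty_pts": "300"}
print(validate(doc, CUSTOMER_MODEL))
# ['missing required field: email', 'loyalty_pts: expected int, got str']
```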



Quote for the day:


"If you realize you aren't so wise today as you thought you were yesterday, you're wiser today." -- Olin Miller