
Daily Tech Digest - June 23, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The 10 biggest issues IT faces today

“The AI explosion and how quickly it has come upon us is the top issue for me,” says Mark Sherwood, executive vice president and CIO of Wolters Kluwer, a global professional services and software firm. “In my experience, AI has changed and progressed faster than anything I’ve ever seen.” To keep up with that rapid evolution, Sherwood says he is focused on making innovation part of everyday work for his engineering team. ... “Modern digital platforms generate staggering volumes of telemetry, logs, and metrics across an increasingly complex and distributed architecture. Without intelligent systems, IT teams drown in alert fatigue or miss critical signals amid the noise,” he explains. “What was once a manageable rules-based monitoring challenge has evolved into a big data and machine learning problem.” He continues: “This shift requires IT organizations to rethink how they ingest, manage, and act upon operational data. It’s not just about observability; it’s about interpretability and actionability at scale.” ... CIOs today are also paying closer attention to geopolitical news and determining what it means for them, their IT departments, and their organizations. “These are uncertain times geopolitically, and CIOs are asking how that will affect IT portfolios and budgets and initiatives,” Squeo says.


Clouded judgement: Resilience, risk and the rise of repatriation

While the findings reflect growing concern, they also highlight a strategic shift, with 78% of leaders now considering digital sovereignty when selecting tech partners, and 68% saying they will only adopt AI services where they have full certainty over data ownership. For some, the answer is to take back control. Cloud repatriation is gaining some traction, at least in terms of mindset, but this has yet to translate into a mass exodus from the hyperscalers. Even so, calls for digital sovereignty are getting louder. In Europe, the Euro-Stack open letter has reignited the debate, urging policymakers to champion a competitive, sovereign digital infrastructure. But while politics might be a trigger, the key question is not whether businesses are abandoning cloud (most aren’t) but whether the balance of cloud usage is changing, driven as much by cost as by performance needs and rising regulatory risks. ... “Despite access to cloud cost-optimisation teams, there was limited room to reduce expenses,” says Jonny Huxtable, CEO of LinkPool. After assessing bare-metal and colocation options, LinkPool decided to move fully to Pulsant’s colocation service. The company claims the move achieved a 90% to 95% cost reduction alongside major performance improvements and enhanced disaster recovery capabilities.


Cookie management under the Digital Personal Data Protection Act, 2023

Effective cookie management under the DPDP Act, as detailed in the BRDCMS, requires real-time updates to user preferences. Users must have access to a dedicated cookie preferences interface that allows them to modify or revoke their consent without undue complexity or delay. This interface should be easily accessible, typically through privacy settings or a dedicated cookie management dashboard. The real-time nature of these updates is crucial for maintaining compliance with the principles of consent enshrined in the DPDP Act. When a user withdraws consent for specific cookie categories, the system must immediately cease the collection and processing of data through those cookies, ensuring that the user’s privacy preferences are respected without delay. Transparency is one of the fundamental pillars of the DPDP Act and extends to cookie usage disclosure. While the DPDP Act itself remains silent on specific cookie policies, the BRDCMS mandates a clear and accessible cookie policy outlining the purposes of cookie usage, data-sharing practices, and the implications of different consent choices. The cookie policy serves as a comprehensive resource enabling users to make informed decisions about their consent preferences.
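The immediate-effect requirement described above can be sketched as a tiny consent registry: once consent for a category is withdrawn, every subsequent collection check fails. This is a minimal illustration, assuming an in-memory store; the class, method, and category names are invented for the example and are not drawn from the BRDCMS or any real implementation.

```python
class CookieConsentManager:
    """Tracks per-user consent by cookie category and enforces
    immediate effect on withdrawal, as the consent principles require."""

    CATEGORIES = {"essential", "analytics", "advertising"}

    def __init__(self):
        # Essential cookies are always permitted; others default to off.
        self._consent = {}

    def grant(self, user_id, category):
        if category not in self.CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self._consent.setdefault(user_id, {"essential"}).add(category)

    def withdraw(self, user_id, category):
        # Withdrawal takes effect immediately: subsequent checks fail,
        # so no further collection happens through that category.
        if category == "essential":
            return  # strictly necessary cookies are exempt from consent
        self._consent.get(user_id, set()).discard(category)

    def may_collect(self, user_id, category):
        """Call before reading or writing any non-essential cookie."""
        if category == "essential":
            return True
        return category in self._consent.get(user_id, set())
```

In a real system the registry would be backed by persistent storage and propagated to every service that sets cookies, but the enforcement point is the same: a consent check at the moment of collection, not a periodic sync.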


AI agents win over professionals - but only to do their grunt work, Stanford study finds

According to the report, the majority of workers are ready to embrace agents for the automation of low-stakes and repetitive tasks, "even after reflecting on potential job loss concerns and work enjoyment." Respondents said they hoped to focus on more engaging and important tasks, mirroring what's become something of a marketing mantra among big tech companies pushing AI agents: that these systems will free workers and businesses from drudgery, so they can focus on more meaningful work. The authors also noted "critical mismatches" between the tasks that AI agents are being deployed to handle -- such as software development and business analysis -- and the tasks that workers are actually looking to automate. ... The study could have big implications for the future of human-AI collaboration in the workplace. Using a metric that they call the Human Agency Scale (HAS), the authors found "that workers generally prefer higher levels of human agency than what experts deem technologically necessary." ... The report further showed that the rise of AI automation is causing a shift in the human skills that are most valued in the workplace: information-processing and analysis skills, the authors said, are becoming less valuable as machines become increasingly competent in these domains, while interpersonal skills -- including "assisting and caring for others" -- are more important than ever.


New OLTP: Postgres With Separate Compute and Storage

The traditional methods for integrating databases are complex and not suited to AI, Xin said. The challenge lies in integrating analytics and AI with transactional workloads. Consider what developers would do when adding a feature to a code base, Xin said in his keynote address at the Data + AI Summit. They’d create a new branch of the codebase and make changes to the new branch. They’d use that branch to check bugs, perform testing and so on. Xin said creating a new branch is an instant operation. What’s the equivalent for databases? The only option is to clone your production database, which might take days. How do you set up secure networking? How do you create ETL pipelines and log data from one to another? ... Streaming is now a first-class citizen in the enterprise, Mohan told me. The separation of compute and storage makes a difference. We are approaching an era when applications will scale infinitely, both in terms of the number of instances and their scale-out capabilities. And that leads us to new questions about how we start to think about evaluation, observability and semantics. Accuracy matters. ... ADP may have the world’s best payroll data, Mohan said, but then that data has to be processed through ETL into an analytics solution like Databricks. Then comes the analytics and the data science work. The customer has to perform a significant amount of data engineering work and preparation.


Can AI Save Us from AI? The High-Stakes Race in Cybersecurity

Reluctant executives and budget hawks can shoulder some of the responsibility for slow AI adoption, but they’re hardly the only barriers. Increasingly, employees are voicing legitimate concerns about surveillance, privacy and the long-term impact of automation on job security. At the same time, enterprises may face structural issues when it comes to integration: fragmented systems, a lack of data inventory and access controls, and other legacy architectures can also hinder the secure integration and scalability of AI-driven security solutions. Meanwhile, bad actors face none of these considerations. They have immediate, unfettered access to open-source AI tools, which can enhance the speed and force of an attack. They operate without AI tool guardrails, governance, oversight or ethical constraints. ... Insider threat detection is also maturing. AI models can detect suspicious behavior, such as unusual access to data, privilege changes or timing inconsistencies, that may indicate a compromised account or insider threat. Early adopters, such as financial institutions, are using behavioral AI to flag synthetic identities by spotting subtle deviations that traditional tools often miss. They can also monitor behavioral intent signals, such as a worker researching resignation policies before initiating mass file downloads, providing early warnings of potential data exfiltration.


The complexities of satellite compute

“In cellular communications on the ground, this was solved a few decades ago. But doing it in space, you have to have the computing horsepower to do those handoffs as well as the throughput capability.” This additional compute needs to be in "a radiation tolerant form, and in such a way that they don't consume too much power and generate too much heat to cause massive thermal problems on the satellites." In LEO, satellites face a barrage of radiation. "It's an environment that's very rich in protons," O'Neill says. "And protons can cause upsets in configuration registers, they can even cause latch-ups in certain integrated circuits." The need to be more radiation tolerant has also pushed the industry towards newer hardware as, the smaller the process node, the lower the operating voltage. "Reducing operating voltage makes you less susceptible to destructive effects," O'Neill explains. One failure mode, the single-event latch-up, causes the device to conduct a large current from power to ground through the integrated circuit, potentially frying it. ... Modern integrated circuits are a lot less susceptible to these single-event latch-ups, but are not completely immune. "While the core of the circuit may be operating at a very low voltage, 0.7 or 0.8 volts, you still have I/O circuits in the integrated circuit that may be required to interoperate with other ICs at 3.3 volts or 2.5 volts," O'Neill adds.


How CISOs can justify security investments in financial terms

A common challenge we see is the absence of a formal ERM program, or the fragmentation of risk functions, where enterprise, cybersecurity, and third-party risks are evaluated using different impact criteria. This lack of alignment makes it difficult for CISOs to communicate effectively with the C-suite and board. Standardizing risk programs and using consistent impact criteria enables clearer risk comparisons, shared understanding, and more strategic decision-making. This challenge is further exacerbated by the rise of AI-specific regulations and frameworks, including the NIST AI Risk Management Framework, the EU AI Act, the NYC Bias Audit Law, and the Colorado Artificial Intelligence Act. ... Communicating security investments in clear, business-aligned risk terms—such as High, Medium, or Low—using agreed-upon impact criteria like financial exposure, operational disruption, reputational harm, and customer impact makes it significantly easier to justify spending and align with enterprise priorities. ... In our Virtual CISO engagements, we’ve found that a risk-based, outcome-driven approach is highly effective with executive leadership. We frame cyber risk tolerance in financial and operational terms, quantify the business value of proposed investments, and tie security initiatives directly to strategic objectives. 


From fear to fluency: Why empathy is the missing ingredient in AI rollouts

In the past, teams had time to adapt to new technologies. Operating systems or enterprise resource planning (ERP) tools evolved over years, giving users more room to learn these platforms and acquire the skills to use them. Unlike previous tech shifts, this one with AI doesn’t come with a long runway. Change arrives overnight, and expectations follow just as fast. Many employees feel like they’re being asked to keep pace with systems they haven’t had time to learn, let alone trust. A recent example would be ChatGPT reaching 100 million monthly active users just two months after launch. ... This underlines the emotional and behavioral complexity of adoption. Some people are naturally curious and quick to experiment with new technology while others are skeptical, risk-averse or anxious about job security. ... Adopting AI is not just a technical initiative, it’s a cultural reset, one that challenges leaders to show up with more empathy and not just expertise. Success depends on how well leaders can inspire trust and empathy across their organizations. The 4 E’s of adoption offer more than a framework. They reflect a leadership mindset rooted in inclusion, clarity and care. By embedding empathy into structure and using metrics to illuminate progress rather than pressure outcomes, teams become more adaptable and resilient.


Why networks need AIOps and predictive analytics

Predictive Analytics – a key capability of AIOps – forecasts future network performance and problems, enabling early intervention and proactive maintenance. Further, early prediction of bottlenecks or additional requirements helps to optimise the management of network resources. For example, when organisations have advance warning about traffic surges, they can allocate capacity to prevent congestion and outages, and enhance overall network performance. A range of mundane tasks, from incident response to work order generation to network configuration to proactive IT health checks and maintenance scheduling, can be automated with AIOps to reduce the load on IT staff and free them up to concentrate on more strategic activities. ... When traditional monitoring tools were unable to identify bottlenecks in a healthcare provider’s network that was seeing a slowdown in its electronic health records (EHR) system during busy hours, a switch to AIOps resolved the problem. By enabling observability across domains, the system highlighted that performance dipped when users logged in during shift changes. It also predicted slowdowns half an hour in advance and automatically provisioned additional resources to handle the surge in activity. The result was a 70 percent reduction in the most important EHR slowdowns, improvement in system responsiveness, and freeing up of IT human resources.
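The forecasting idea described above can be illustrated with a toy example: fit a linear trend to recent load samples and flag when the projected load will breach capacity within a horizon, giving automation time to provision resources before the surge. Everything here, the function names, the six-step horizon, the capacity figure, is invented for the sketch; production AIOps systems use far richer models.

```python
def forecast_load(samples, steps_ahead):
    """Least-squares linear extrapolation of evenly spaced load samples."""
    n = len(samples)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    # Project the fitted trend line steps_ahead past the last sample.
    return mean_y + slope * ((n - 1 + steps_ahead) - mean_x)

def needs_provisioning(samples, capacity, steps_ahead=6):
    """True if projected load breaches capacity within the horizon,
    i.e. the early-warning signal that triggers proactive scaling."""
    return forecast_load(samples, steps_ahead) > capacity
```

A rising series like `[10, 20, 30, 40, 50]` against a capacity of 100 trips the warning six steps out, while a flat series does not; that gap between "fine now" and "will not be fine soon" is exactly what reactive threshold alerts cannot see.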

Daily Tech Digest - March 21, 2025


Quote for the day:

"A leader is one who knows the way, goes the way, and shows the way." -- John C. Maxwell



Synthetic data and the risk of ‘model collapse’

There is a danger of an ‘ouroboros’ here, or a snake eating its own tail. Models can be ‘poisoned’ with data that is passed on in addition to malicious prompts. While usually caused by sabotage, this can also be unintentional: AI models sometimes hallucinate, including when they are generating data for their LLM descendant. With enough ongoing errors, a new LLM risks performing worse than its predecessors. At its core, it’s a simple case of garbage in, garbage out. The logical end state is a total ‘model collapse‘, where drivel overtakes anything factual and makes an LLM dysfunctional. Should this happen (and it may have happened with GPT-4.5), AI model makers are forced to pull back to an earlier checkpoint, reassess their data or be forced to make architectural changes. ... In short, a high degree of expertise is required for each step in the AI process. Currently, attention is focused on the initial building of the foundation models on the one hand and the actual implementation of GenAI on the other. The importance of training data came to the fore in 2023, as online organizations regularly felt robbed of their content. In essence, it made headlines, which is why we all became aware of the intricacies of training data. Now that the flow of online retrievable data is ending, AI players are grasping for an alternative that is creating new problems.


Automated Workflow Perfection Is a Job in Itself

“The fragmented nature of automation – spanning robotic process automation, business process management, workflow tools and AI-powered solutions all further complicates consistent measurement,” lamented Gaudette. “Market segment overlap presents another challenge. As technologies increasingly converge, traditional category boundaries blur. A document processing solution might be classified under workflow automation by one analyst and digital process automation by another, creating inconsistent market size calculations.” Other survey “findings” from Custom Workflows’ analysis report suggest that the integration of artificial intelligence with traditional automation represents a particularly powerful growth catalyst. McKinsey’s own analysis reveals that while basic automation delivers 20-30% cost reductions, intelligent automation incorporating AI can achieve 50-70% savings while simultaneously improving quality and customer experience. ... As the market for workflow automation now goes into what we might call an amplified state of flux, it appears that current automation adoption follows a classic bell curve distribution, with most organizations clustered in the middle stages of implementation maturity. Surprisingly, smaller organizations often outperform their larger counterparts when it comes to automation success. 


The hidden risk in SaaS: Why companies need a digital identity exit strategy

To reduce dependency on external SaaS providers, organizations should consider taking back control of their digital identity infrastructure. This doesn’t mean abandoning cloud services altogether, but rather strategically deploying identity management solutions that provide ownership and portability. Self-hosted identity solutions running on private cloud or on-premises environments can offer greater control. Businesses should also consider multi-cloud identity architectures allowing authentication and access control to function across different cloud providers.  ... Organizations must closely monitor data sovereignty laws and adjust their infrastructure accordingly. Ensuring that identity solutions comply with shifting regulations will help avoid legal and operational risks. To avoid being caught off guard, it’s important for IT teams to understand what’s going on behind the scenes rather than entirely outsourcing their infrastructure. For the highest level of preparedness, organizations can manage identity infrastructure systems themselves, reducing reliance on third party SaaS companies for critical functions. If teams understand the inner workings of their identity management, they will be better placed to develop an emergency response plan with predefined steps to transition services in case of sudden geopolitical changes.


Why Your Business Needs an AI Innovation Unit

An AI innovation unit should always support sustainable and strategic organizational growth through the ethical and impactful application and integration of AI, McDonagh-Smith says. "Achieving this mission involves identifying and deploying AI technologies to solve complex and simple business problems, improving efficiency, cultivating innovation, and creating measurable new organizational value." A successful unit, McDonagh-Smith states, prioritizes aligning AI initiatives with the enterprise's long-term vision, ensuring transparency, fairness, and accountability in its AI applications. ... An AI innovation unit leader is foremost a business leader and visionary, responsible for helping the enterprise embrace and effectively use AI in an ethical and responsible manner, Hall says. "The leader needs to understand the risk and concerns, but also AI governance and frameworks." He adds that the leader should also be realistic and inspiring, with an understanding of the hype curve and the technology's potential. ... An AI innovation unit requires a collaborative culture that bridges silos within the organization and commits to continuous reflection and learning, McDonagh-Smith says. "The unit needs to establish practical partnerships with academic institutions, tech startups, and AI thought leadership groups to create flows of innovation, intelligence, and business insights."


How to avoid the AI complexity trap

When done right, AI enables simplicity, cutting across layers of complexity -- but with limits. "AI is not a silver bullet," said Richard Demeny, a software development consultant, formerly with Arm. "LLMs under the hood actually use probabilities, not understanding, to give answers. It's humans who design, build, and implement systems, and while AI may automate some entry-level roles and certainly bring significant productivity gains, it cannot replace the amount of practical experience IT decision-makers need to make the right trade-offs." ... To keep both AI and IT complexity at bay, "deployment of AI needs to be thoughtful," said Hashim. "Focus on the simplicity of user experience, quality of AI, and its ability to get things done," she said. "Uplevel all your employees with AI so that your organization as a whole can be more productive and happy." Consistency is the key to managing complexity, Howard said. Platforms, for example, "make things consistent. So you're able to do things -- sometimes very complicated things -- in consistent ways and standard ways that everybody knows how to use them. Even something as simple as definitions or taxonomy. If everybody is speaking the same language, so a simplified taxonomy, then it's much easier to communicate."  


Outsmart the skills gap crisis and build a team without recruitment

Team augmentation involves engaging external software engineers from a partner company to complement an existing in-house team. This approach provides companies with the flexibility to quickly scale their technical resources up or down, depending on the project’s needs, and plug any capability gaps inside their teams. It can be crucial to the success of businesses whose product is software, or relies on software, as it enables businesses to scale their team and projects flexibly without the risks involved with growing an in-house team. ... It allows companies to access a diverse range of skills and expertise that may not be available in-house. Companies can quickly ramp up their technical resources and tackle projects that require specialised skills or knowledge whilst onboarding engineers who can bring fresh ideas and perspectives to the project. Having access to this expertise quickly is often of paramount importance as companies compete to grow. For instance, if a company needs to design, develop, and support a mobile app, but its in-house team lacks the necessary skills and experience, it can quickly engage a team of engineers who specialise in mobile app development to work on the project. This approach can help companies save time and resources and ensure that their projects are completed on time and to a high standard.


Taking AI Commoditization Seriously

Commoditization is the process of products or services becoming “standardized, marketable objects.” Any given unit of a commodity, from corn to crude oil, is generally interchangeable with and sells for the same price as others. Commoditization of frontier models could emerge in a few ways. Perhaps, as Yann LeCun predicts, open-source models could equal or surpass closed-source performance. Or perhaps competing firms continue finding ways to match each other’s developments. Such competition has more above-board variants—top-tier engineers at different firms keeping pace with each other—and less. Consider, for instance, OpenAI’s allegations against DeepSeek of inappropriate copying. ... The emergence of new, decentralized AI threat vectors could offer the powers that be a common enemy. This might present a unique opportunity for US-China collaboration. Modern US-China collaboration has required tangible mutual interest to succeed. The most famous modern US-China agreement, the Nixon/Kissinger-Mao/Zhou normalization of US-China relations, occurred in large part to overcome a perceived common threat in the USSR. When few companies control cutting-edge frontier models, preventing third-party model misuse is comparatively simple. Fewer frontier developers imply fewer sites to monitor for malicious actors. 


Making Architecturally Significant Decisions

Architectural decisions are at the root of our practice but they are often hard to spot. The vast majority of decisions get processed at the team level and do not apply architectural thinking or have an architect involved at all. This approach can be a benefit in agile organizations if managed and communicated effectively. ... Envision an enterprise or company, then imagine all the teams in the organization working in parallel on changes, remember to add in maintenance teams and operations teams doing ‘keep the lights running’ work. ... To effectively manage decisions, the architecture team should put in place a decision management process early in its lifecycle, by making critical investments into how the organization is going to process decision points in the architecture engagement model. During the engagement methodology update and the engagement principles definition, the team will decide what levels of decisions must be exposed in the repository and their limits in duration, quality and effort. These principles will guide the decision methods for the entire team until the next methodology update. There are numerous decision methods and theories in the marketplace for making better decisions. The goal of the architecture decision repository is to ensure that decisions are made clearly, with appropriate tools and with respect for traceability.


What is predictive analytics? Transforming data into future insights

Predictive analytics draws its power from many methods and technologies, including big data, data mining, statistical modeling, ML, and assorted mathematical processes. Organizations use predictive analytics to sift through current and historical data to detect trends, and forecast events and conditions that should occur at a specific time, based on supplied parameters. With predictive analytics, organizations can find and exploit patterns contained within data in order to detect risks and opportunities. Models can be designed, for instance, to discover relationships between various behavior factors. Such models enable the assessment of either the promise or risk presented by a particular set of conditions, guiding informed decision making across various categories of supply chain and procurement events. ... Predictive analytics makes looking into the future more accurate and reliable than previous tools. As such it can help adopters find ways to save and earn money. Retailers often use predictive models to forecast inventory requirements, manage shipping schedules, and configure store layouts to maximize sales. Airlines frequently use predictive analytics to set ticket prices reflecting past travel trends. 
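As a minimal illustration of the pattern-based forecasting described above, a "seasonal average" baseline predicts next month's demand from the historical mean for that calendar month, the kind of starting point a retailer might use before graduating to richer ML models. The data shape and function name are assumptions made up for this example.

```python
from collections import defaultdict

def seasonal_forecast(history, month):
    """Forecast demand for a calendar month (1-12) as the mean of
    all past observations for that month.

    history: list of (month, demand) pairs drawn from prior years.
    """
    by_month = defaultdict(list)
    for m, demand in history:
        by_month[m].append(demand)
    if month not in by_month:
        raise ValueError(f"no history for month {month}")
    values = by_month[month]
    return sum(values) / len(values)
```

With two past Januaries of 100 and 120 units, the January forecast is 110. Real predictive analytics layers trend, promotions, and external signals on top of such a baseline, but the baseline is what any fancier model must beat.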


C-Suite Leaders Must Rewire Businesses for True AI Value

AI's true value doesn't come from incremental gains but emerges when workflows are transformed completely. McKinsey found 21% of companies using gen AI have redesigned workflows and seen a significant effect on their bottom line. Morgan Stanley redesigned client interactions by integrating AI-powered assistants. Rather than just automating document retrieval, the company embedded AI into workflows, enabling advisers to generate customized reports and insights in real time. This improved efficiency and enhanced customer experience through more data-driven, personalized interactions. Boston Consulting Group highlighted that companies embedding AI into core business workflows report 40% higher process efficiency and 25% faster output. For CIOs and AI leaders, this highlights a crucial point. Deploying AI without rethinking workflows resembles putting a turbo engine in a low-end car. The real competitive advantage comes from integrating AI into the fabric of business operations and not in standalone tasks. ... AI is becoming a core function that enhances decision-making, automates tasks and drives innovation. McKinsey's report emphasized that AI's biggest value lies in large-scale transformation, not isolated use cases. 

Daily Tech Digest - November 01, 2024

How CISOs can turn around low-performing cyber pros

When facing difficulties in both their professional and personal lives, people can start to withdraw and be less interested in contributing, even doing the bare minimum. They might also make mistakes more often or miss deadlines, or they can care less about how their colleagues or managers perceive their work. Body language can also provide insight into an employee’s emotional state and engagement level. When assigning tasks, Michelle Duval, founder and CEO at Marlee, a collaboration and performance AI for the workplace, looks her colleagues in the eyes. “Avoiding eye contact or visible sighing… are helpful clues,” she says. ... When it comes to helping employees improve their performance, the key point is to understand why they have problems in the first place and act quickly. “The best coaching depends on what type of problem you’re fixing,” says Caroline Ceniza-Levine, executive recruiter and career coach. “If the employee’s work product is suffering, they may need more direction or skills training. If the employee is disengaged, they may need help getting motivated – in this case, giving them more information around why their work matters and how important their contribution is may help.”


AI in Finserv: Predictive Analytics to Inclusive Banking

AI’s ability to synthesise vast amounts of data allows organisations to connect data from previously disparate sources, and then analyse it to detect historical patterns and deliver forward-looking insights. In the banking industry, this is happening at both a high level through traditional data analysis, and, increasingly, through more advanced AI tools including Natural Language Processing (NLP) and Machine Learning (ML). As organisations continue gathering these predictive insights, many are also in the process of providing feedback to their AI systems which will ultimately improve their predictive accuracy over time. The main use case in which banks are currently seeing the biggest impact from AI-powered predictive insights is in forecasting consumer behaviour. ... AI-powered fraud detection algorithms can analyse vast amounts of transaction data in real-time at a scale that’s unattainable by humans. The real-time nature of these systems also allows organisations to prevent loss by intercepting anomalous transactions before they’re settled. This scalable, automatic approach also makes it easier for financial organisations to stay in compliance with relevant anti-money laundering (AML) and anti-terrorist financing regulations and avoid steep penalties.


Critical Software Must Drop C/C++ by 2026 or Face Risk

The federal government is heightening its warnings about dangerous software development practices, with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) issuing stark warnings about basic security failures that continue to plague critical infrastructure. ... The report also states that the memory safety roadmap should outline the manufacturer’s prioritized approach to eliminating memory safety vulnerabilities in priority code components. “Manufacturers should demonstrate that the memory safety roadmap will lead to a significant, prioritized reduction of memory safety vulnerabilities in the manufacturer’s products and demonstrate they are making a reasonable effort to follow the memory safety roadmap,” the report said. “There are two good reasons why businesses continue to maintain COBOL and Fortran code at scale. Cost and risk,” Shimmin told The New Stack. “It’s simply not financially possible to port millions of lines of code, nor is it a risk any responsible organization would take.” ... Finally, it is good that CISA is recommending that companies with critical software in their care should create a stated plan of attack by early 2026, Shimmin said.


Into the Wild: Using Public Data for Cyber Risk Hunting

Threat hunting, by contrast, is a proactive approach. It means that cyber teams go out into the wild and proactively identify potential risks and threat patterns, isolating them before they can cause any harm. A threat-hunting team requires specific knowledge and skills. Therefore, it usually consists of various professionals, such as threat analysts, who analyze available data to understand and predict the attacker's behavior; incident responders, who are ready to reduce the impact of a security incident; and cybersecurity engineers, responsible for building a secure network solution capable of protecting the network from advanced threats. These teams are trained to understand their company's IT environment, gather and analyze relevant data, and identify potential threats. Moreover, they have a clear risk escalation and communication process, which helps effectively react to threats and mitigate risks. Specialists often use a combination of tools that help in threat hunting. ... Endpoint detection and response (EDR) systems combine continuous real-time monitoring and collection of endpoint data with a rule-based automated response.
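A rule-based automated response of the kind EDR systems apply might look like the following sketch; the rule names, event fields, and response actions are all hypothetical, not any real product's detection schema.

```python
# Minimal sketch of a rule-based EDR-style matcher: endpoint events are
# checked against detection rules, and matches trigger an automated response.
# Rule fields and responses are illustrative assumptions.

RULES = [
    {"name": "suspicious-powershell",
     "match": lambda e: e["process"] == "powershell.exe" and "-enc" in e["cmdline"],
     "response": "isolate_host"},
    {"name": "lsass-access",
     "match": lambda e: e.get("target_process") == "lsass.exe",
     "response": "kill_process"},
]

def evaluate(event):
    """Return the automated responses triggered by a single endpoint event."""
    return [(r["name"], r["response"]) for r in RULES if r["match"](event)]

event = {"process": "powershell.exe", "cmdline": "powershell.exe -enc SQBFAFgA"}
print(evaluate(event))  # → [('suspicious-powershell', 'isolate_host')]
```

Real EDR rule engines are far richer (process trees, behavioral baselines, cloud correlation), but the continuous match-then-respond loop is the core idea.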


How to Keep IT Up and Running During a Disaster

Using IoT sensing technology can provide early warning of disaster events and keep an eye on equipment if human access to facilities is cut off. Sensors and cameras can be helpful in determining when it may be appropriate to switch operations to other facilities or back up servers. Moisture sensors, for example, can detect whether floods may be on the verge of impacting device performance. ... In disaster-prone regions, it is advisable to proactively facilitate relationships with government authorities and emergency response agencies. This can be helpful both in ensuring continued compliance and assistance in the event of a natural disaster. “There are certain aspects of [disaster response] that need to be captured,” Miller says. “A lot of times in crisis mode, that becomes a secondary focus. But [disaster management] systems allow the tracking and the recording of that information.” Being aware of deadlines for compliance reporting and being in contact with regulators if they might be missed can save money on potential fines and penalties. And notifying emergency response agencies may result in prioritization of assistance given the economic imperatives of IT continuity.
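The sensor-driven failover decision described above can be sketched as follows; the moisture threshold, the three-reading debounce, and the action names are illustrative assumptions.

```python
# Illustrative sketch of the early-warning logic described above: readings
# from a moisture sensor drive a decision to switch operations to a backup
# site. Threshold and site actions are assumptions for illustration.

MOISTURE_FAILOVER_THRESHOLD = 80.0  # percent relative humidity, illustrative

def failover_decision(readings, threshold=MOISTURE_FAILOVER_THRESHOLD):
    """Recommend failover when three consecutive readings exceed the
    threshold, guarding against a single noisy sample."""
    recent = readings[-3:]
    if len(recent) == 3 and all(r > threshold for r in recent):
        return "switch_to_backup_site"
    return "continue_monitoring"

print(failover_decision([45.2, 61.0, 83.1, 88.4, 91.7]))  # → switch_to_backup_site
```

Requiring several consecutive readings above the threshold is one simple way to avoid triggering a costly site switch on a single faulty sensor sample.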


Breaking Down Data Silos With Real-Time Streaming

Traditional "extract, transform, load" and "extract, load, transform" data pipelines have historically been the primary method for moving data into analytics. But analytics consumers have often had limited control or influence over the source data model, which is typically defined by application developers in the operational domain. Data is also often stale and outdated by the time it arrives for processing. "By shifting data processing and governance, organizations can eliminate redundant pipelines, reduce the risk and impact of bad data at its source, and leverage high-quality, continuously up-to-date data assets for both operational and analytical purposes," LaForest said. Real-time data streaming is especially crucial in sectors such as finance, e-commerce and logistics, where even a few seconds of delay can negatively impact customer satisfaction and profitability. ... Real-time data streaming is emerging as the foundation for the next wave of AI innovation. For predictive AI and pattern recognition, data needs to be available in real time to drive accurate, immediate insights. Real-time data pipelines are essential for enabling AI systems to deliver smarter, faster insights and drive more accurate decision-making across the enterprise.


Is now the right time to invest in implementing agentic AI?

What makes agentic AI autonomous or able to take actions independently is its ability to interpret data, predict outcomes, and make decisions, learning from new data — unlike traditional RPA, which falters when encountering unexpected data, said Cameron Marsh, senior analyst at Nucleus Research. This adaptive nature of agentic AI, according to Chada, can help enterprises increase efficiency by handling complex, variable tasks that traditional RPA can’t manage, such as the roles of a claims adjuster, a loan officer, or a case worker, provided that it has access to the necessary data, workflows, and tools required to complete the task. ... Some platform vendors are already offering low-code and no-code agent development and management platforms, but these are limited in their functionality to building simple agents or modifying templates for agents built by the vendors themselves, analysts said. “Creating more complex agents, specifically ones that require customized integrations and nuanced decision-making abilities still demands some technical understanding of data flows, machine learning model tuning, and API integrations,” Futurum’s Hinchcliffe said, adding that there is a learning curve on these platforms and that the migration journey could be resource intensive.


How open-source MDM solutions simplify cross-platform device management

Few MDM solutions effectively address the challenge of device diversity, as most are designed to manage specific hardware or software platforms. This limitation forces businesses to juggle multiple solutions to cover their entire device ecosystem. Open-source MDM solutions, however, offer flexible, modular architectures that adapt to various operating systems and device types. Open standards and extensible APIs ensure cross-platform compatibility, from mobile devices to servers to IoT endpoints. Unified management interfaces abstract platform complexities, providing consistent administration across diverse devices, while collaboration with open-source communities broadens device support. These approaches simplify management for IT teams in heterogeneous environments, reducing the need for multiple specialized solutions. ... An effective MDM solution enhances device management in remote locations by enabling developers and administrators to create lightweight agents for low-bandwidth environments and implement platform-agnostic policies for diverse ecosystems. With custom scripts and modular components, businesses can tailor management workflows to align with specific operational demands, ensuring seamless integration across various environments. 
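One way a unified management interface can abstract platform differences, as described above, is to keep a single abstract policy and translate it per platform with small adapters; the platform names and settings keys below are assumptions for illustration, not any real MDM product's schema.

```python
# Illustrative sketch of a platform-agnostic MDM policy: one abstract policy
# is rendered into each platform's native settings by an adapter function.

POLICY = {"require_encryption": True, "min_os_version": "14.0"}

ADAPTERS = {
    "android": lambda p: {"encryptionRequired": p["require_encryption"],
                          "minimumOsVersion": p["min_os_version"]},
    "ios": lambda p: {"forceEncryptedBackup": p["require_encryption"],
                      "minOSVersion": p["min_os_version"]},
}

def render(policy, platform):
    """Translate the abstract policy into one platform's native settings."""
    return ADAPTERS[platform](policy)

print(render(POLICY, "ios"))
# → {'forceEncryptedBackup': True, 'minOSVersion': '14.0'}
```

Adding support for a new device type then means writing one new adapter rather than deploying a new management product, which is the modularity the open-source approach is selling.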


4 Essential Strategies for Enhancing Your Application Security Posture

Whatever the cause, the torrent of false positives wastes time, lowers security team morale, and obscures real threats. As a result, risks of a major oversight increase, and response time to actual threats slows, leading to undetected breaches, data loss, financial damage, and erosion of customer trust. ... To successfully implement shifting left, AppSec must deliver solutions that eliminate the burden of manual security tasks. The ASPM strategy is to integrate tools directly into the development environment to make security checks a seamless part of the development workflow. Such integrations would provide real-time feedback and actionable security guidance, minimizing disruptions and significantly enhancing productivity. ... One of the biggest challenges in AppSec today is tool sprawl. The wide array of tools promising to plug different security gaps burdens security teams with a complex security ecosystem that locks critical data into tool-specific silos. This data fragmentation makes it impossible for security teams to gain a holistic view of the security environment, leading to confusion and missed vulnerabilities when insights from one tool don’t correlate with insights from another.


How a classical computer beat a quantum computer at its own game

Confinement is a phenomenon that can arise under special circumstances in closed quantum systems and is analogous to the quark confinement known in particle physics. To understand confinement, let's begin with some quantum basics. On quantum scales, an individual magnet can be oriented up or down, or it can be in a "superposition"—a quantum state in which it points both up and down simultaneously. How up or down the magnet is affects how much energy it has when it's in a magnetic field. ... Serendipitously, IBM had, in their initial test, set up a problem where the organization of the magnets in a closed two-dimensional array led to confinement. Tindall and Sels realized that since the confinement of the system reduced the amount of entanglement, it kept the problem simple enough to be described by classical methods. Using simulations and mathematical calculations, Tindall and Sels came up with a simple, accurate mathematical model that describes this behavior. "One of the big open questions in quantum physics is understanding when entanglement grows rapidly and when it doesn't," Tindall says. 



Quote for the day:

"The meaning of life is to find your gift. The purpose of life is to give it away." -- Anonymous

Daily Tech Digest - June 15, 2023

The five new foundational qualities of effective leadership

Today’s leaders have to be able to establish a compelling destination and then navigate through the fog with a compass. “You have to be ready to make a decision today, realizing that you may get new data tomorrow that means you have to reverse the decision you just made,” a veteran CEO of a Fortune 25 company told us. “You have to have the courage to follow that new information. The job’s always been ambiguous. But the environment has never been this fluid.” Boards and CEOs expect succession candidates to be adept at providing direction and key performance indicators that will signal whether course adjustments are necessary. “We’re living in an age with many more discontinuities than we had a generation or two ago,” said Mark Thompson, former CEO of the New York Times Company and now board chairman of Ancestry. “It’s not about trying to find the perfect strategies. It’s more about helping organizations to be more open, flexible, and adaptable to change.” This shift demands a more dynamic, individual leadership approach, as well as a reimagining of basic organizational processes. 


5 best practices to ensure the security of third-party APIs

Maintaining an API inventory that automatically updates as code changes is an instrumental first step for an API security program, says Jacob Garrison, a security researcher at Bionic. The inventory should distinguish between first-party and third-party APIs, and it encourages continuous monitoring for shadow IT — APIs brought on board without notifying the security team. “To ensure your inventory is robust and actionable, you should track which APIs transmit business-critical information, such as personally identifiable information and payment card data,” he says. An API inventory is complementary to third-party risk management, according to Garrison. When developers utilize third-party APIs, it’s worthwhile to consider risk assessments of the vendors themselves. ... Frank Catucci, chief technology officer and head of security research for Invicti Security, agrees that including an inventory of third-party APIs is critical. "You need to have third-party APIs be part of your overall API inventory and you have to look at them as assets that you own, that you are responsible for," he says.
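The kind of inventory Garrison describes, distinguishing first- from third-party APIs and tracking which ones carry sensitive data, can be sketched minimally; the field names here are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of an API inventory with first/third-party tagging and
# sensitive-data labels. Field names are assumptions for illustration.

inventory = [
    {"endpoint": "/v1/users", "party": "first", "data": ["pii"]},
    {"endpoint": "https://api.payments.example.com/charge",
     "party": "third", "data": ["payment_card"]},
    {"endpoint": "/v1/health", "party": "first", "data": []},
]

def business_critical(inv):
    """APIs transmitting PII or payment-card data deserve the closest scrutiny."""
    sensitive = {"pii", "payment_card"}
    return [a["endpoint"] for a in inv if sensitive & set(a["data"])]

print(business_critical(inventory))
# → ['/v1/users', 'https://api.payments.example.com/charge']
```

In practice the inventory would be populated automatically from code and traffic analysis rather than maintained by hand, which is the "updates as code changes" property Garrison calls instrumental.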


Generative AI’s change management challenge

“The hardest part of AI acceptance is creating a space where employees can still add value and not feel they are competing with AI to create value,” Bellefonds added. “A lot of the work we do when it comes to change management and coaching is to help employees work with AI and at the same time, change the way they add value, so that a part of their job is taken by AI but their part refocuses on higher value-adding tasks.” Exactly how those processes are rewired and the working methods changed will vary from one enterprise to another, he said. There are other ways in which employees’ concerns about AI are unevenly distributed, too. Leaders are more likely to be optimistic, and frontline workers concerned, BCG found. And while 68% of leaders believe their companies have implemented adequate measures to ensure responsible use of AI, only 29% of their frontline employees feel that way. Despite BCG’s findings of optimism in the workforce, there’s a darker side. Over one-third of respondents think their job is likely to be eliminated by AI, and almost four-fifths want governments to step in and deliver AI-specific regulations to ensure it’s used responsibly.


As Machines Take Over — What Will It Mean to Be Human?

Biocomputing is a field of study that uses biologically-based molecules, such as DNA or proteins, to perform computational tasks. Imitating the genius of nature can completely shift the paradigm of understanding when it comes to the computation and storage of data. The field has shown promise in cryptography and drug discovery. However, biocomputers are still limited compared to non-bio computers since they aren't good at cooling themselves and doing more than two things simultaneously. Advancements in AI, however, have been booming. Since 2012, interest in AI, especially in machine learning, has been renewed, leading to a dramatic increase in funding and investment. Machine learning models ingest large amounts of data and infer patterns. More recently, generative AI has become extremely popular with the release of large AI models such as MidJourney, ChatGPT and Stable Diffusion. Generative AI is a class of AI algorithms that generate new data or content extremely similar to existing data, nearly identical to human-made data.


What is SDN and where is it going?

There are three main components to a software-defined network: controller, applications, and devices. The controller has taken over the role of the control plane on each individual network device. It populates the tables that the data planes on those devices use to do their work. There are various communication protocols that can be used for this purpose, including OpenFlow, though some vendors use proprietary protocols. Communication between the controller and devices is referred to as southbound APIs. The software controller is, in turn, managed by applications, which can fulfill any number of network administration roles, including load balancers, software-defined security services, orchestration applications, or analytics applications that keep tabs on what's going on in the network. These applications communicate with the controller (northbound APIs) through well-documented REST APIs that allow applications from different vendors to communicate with ease. 
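The northbound flow above, an application handing the controller a flow rule that the controller then programs into device forwarding tables southbound, can be sketched as follows; the endpoint path and JSON field names are hypothetical, not any vendor's actual API.

```python
import json

# Sketch of a northbound REST interaction: an application builds a flow rule
# and would POST it to the controller, which translates it into data-plane
# table entries via its southbound protocol (e.g. OpenFlow). The payload
# shape and controller URL below are hypothetical.

def make_flow_rule(switch_id, dst_ip, out_port, priority=100):
    return {
        "switch": switch_id,
        "match": {"ipv4_dst": dst_ip},
        "actions": [{"type": "output", "port": out_port}],
        "priority": priority,
    }

rule = make_flow_rule("sw-edge-01", "10.0.0.42", 3)
payload = json.dumps(rule)
# An application (e.g. a load balancer) would POST `payload` to a
# controller URL such as http://controller:8181/api/flows (hypothetical).
print(json.loads(payload)["match"]["ipv4_dst"])  # → 10.0.0.42
```

The point of the well-documented northbound REST layer is exactly this: the application only ever manipulates declarative JSON like the above, and never needs to speak OpenFlow or a vendor's proprietary southbound protocol itself.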


Using Trauma-Informed Approaches in Agile Environments

Software is, by definition, very abstract. For this reason, we naturally tend to be in our heads and thoughts most of the time while at work. However, a more trauma-informed approach requires us to pay more attention to our physical state and not just to our brain and cognition. Our body and its sensations are giving us many signs, vital not just to our well-being but also to our productivity and ability to cognitively understand each other and adapt to changes. Paradoxically, in the end, paying more attention to our physical and emotional state gives us more cognitive resources to do our work. Noticing our bodily sensations at the moment, like breath or muscle tension in a particular area, can be a first step to getting out of a traumatic pattern. And a generally higher level of body awareness can help us fall less into such patterns in the first place. Simplified - our body awareness anchors us in the here and now, making it easier for us to recognize past patterns as inadequate for the current situation.


How Pyramid Thinking Can Revolutionize Your Data Strategy

Before devising a corporate data strategy, the main things you need to know are the strategy and objectives of your organization as a whole. Data can be a truly transformative tool, but even the sharpest knife needs to be used accurately to get the best results -- which is why you need to know the end goal before you can understand how data can help you achieve it. This end goal forms the very peak of the pyramid and it is by looking downwards from it that you can understand the role that data can play. For organizations struggling to pinpoint that goal (as oftentimes happens when the business strategy isn’t well-defined and documented), it is worth considering key business problems and the consequent opportunities for improvement. ... Identifying business goals gives you the basis upon which to build your data strategy, and with that you can begin to be more specific about the change you are looking to make. An actionable and measurable formula helps you shape those changes with clarity, such as “we want to do x by measuring/tracking/analyzing y in order to do z.”


Network spending priorities for second-half 2023

Security is the area where most users expect to spend more, but at the same time an area where they believe their spending is most likely to be sub-optimal. Three-quarters of buyers think they already spend too much on security because they’ve layered things on without considering the whole picture. You hear terms like “holistic approach” or “rethinking” a lot in their comments, but at the same time, less than an eighth of the users expect to redo their security strategies in any way.  ... The reasons for the seemingly mindless AI enthusiasm is a simple reversal of an old saying: “Where there’s hope, there’s life.” AI could (theoretically) reduce operator errors. It could (hopefully) improve network capacity planning. It could (presumably) help secure applications and data and spot malefactors. All these things are recurring problems that seem to defy solution, and AI offers a hope that a solution might be near at hand. What’s not to love, provisionally of course.


Biodiversity Means Business

Technology can play a key role in navigating biodiversity issues. Predictive analytics, machine learning, digital twins, blockchain and the Internet of Things can deliver insight, visibility and measurability into sourcing, supply chains and environmental impacts. However, Katic emphasizes that these tools must be used to drive real change. “They must support a paradigm shift to new, sustainable models of development, rather than entrenching business as usual. They must deliver enhanced transparency and accountability,” she says. Ultimately, companies must embed biodiversity deep into their business strategies and daily operations, Katic says. This includes the use of science-based methods that revolve around the UN’s Sustainable Development Goals and its Global Biodiversity Framework. It can also incorporate tools such as the S&P’s scoring system, part of its UN-linked GlobalSustainable1 initiative, which provides dependency scores, ecosystem footprint insights, and other biodiversity data that can guide decision-making. In addition, the SBTN framework can serve as a valuable resource. More than 200 organizations helped shape the initial set of methods, tools, and guidance.


5 roadblocks to Rust adoption in embedded systems

Rust is not a trivial language to learn. While it does share common ideas and concepts with many of the languages that came before it, including C, the learning curve is steeper. When a company looks to adopt a new language, they hire engineers who already know the technology or are forced to train their team. Teams interested in using Rust for embedded will find themselves in a small, niche community. Within this community, not many qualified embedded software engineers know Rust. That means paying a premium for the few developers who know Rust or investing in training the existing internal team. Training a team to use Rust isn’t a bad idea. Every company and developer should be investing in themselves constantly. Our field changes so rapidly that you’ll quickly get left behind if you don’t. However, switching from one programming language to another must provide a return on investment for the company. Especially when switching to an immature language like Rust. 



Quote for the day:

"Don't focus so much on who is following you, that you forget to lead." -- E'yen A. Gardner

Daily Tech Digest - March 28, 2023

Predictive network technology promises to find and fix problems faster

The emerging field of neuromorphic computing, based on a chip architecture that's engineered to mimic human brain structure, promises to provide highly effective ML on edge devices. "Predictive network technology is so powerful because of its ability to intake signals and make accurate predictions about equipment failures to optimize maintenance," says Gil Dror, CTO at monitoring technology provider SmartSense. He says that neuromorphic computing will become even more powerful when it moves from predictive to prescriptive analytics, which recommends what should be done to ensure future outcomes. Neuromorphic computing's chip architecture is geared toward making intelligent decisions on edge devices themselves, Dror says. "The combination of these two technologies will make the field of predictive network technology much more powerful," he says. Organizations including IBM, Intel, and Qualcomm are developing neuromorphic computing technologies. 
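The step from predictive to prescriptive analytics that Dror describes can be illustrated with a deliberately simple sketch: a linear trend predicts when a health metric crosses a failure threshold, and the prescription is to schedule maintenance before it does. The metric, threshold, and lead time are assumptions for illustration.

```python
# Sketch of predictive -> prescriptive: extrapolate a degradation trend to
# predict time-to-failure, then recommend an action. All numbers are
# illustrative assumptions, not from the article.

def days_until_threshold(readings, threshold):
    """Extrapolate a per-day linear trend from daily readings."""
    slope = (readings[-1] - readings[0]) / (len(readings) - 1)
    if slope >= 0:
        return None  # metric not degrading
    return (threshold - readings[-1]) / slope

readings = [100.0, 97.0, 94.0, 91.0]      # daily health score, declining
days = days_until_threshold(readings, 70.0)
action = "schedule_maintenance" if days is not None and days < 14 else "monitor"
print(round(days), action)  # → 7 schedule_maintenance
```

Real systems would use learned models rather than a straight line, but the distinction holds: the prediction is the "7 days", and the prescription is the recommended action attached to it.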


What Wasm Needs to Reach the Edge

As of today, WASM is very much present in the browser, and it is rapidly being adopted for backend server applications. And yet, much work remains before applications can reach the edge. The developer probably does not care that much — they just want their applications to run well and securely wherever they are accessed, without wondering why the edge is not ready yet, only when it will be. Indeed, the developer might want to design one app, deployed as a WebAssembly module, that is distributed across a wide variety of edge devices. Unlike years past, when designing an application for a particular device could mean spending significant time reinventing the wheel for each device type, one of the beautiful things about WASM — once standardization is in place — is that a developer can create a voice-transcription application that runs not only on a smartphone or PC but on a minuscule edge device that can be hidden in a secret agent’s clothing during a mission.


5 hard questions every IT leader must answer

Most of the voluminous academic literature on leadership focuses on the traits/idiosyncrasies of the individual leader and not on their relationships with key associates. As an IT leader, do you have a track record of helping or hindering colleagues in fulfilling their career objectives? Vince Kellen, a digital force of nature and CIO at University of California San Diego, borrows insights from NHL scouts. He is looking for IT “skaters” who, when they step onto the ice, make the other four teammates better hockey players. How leaders view themselves and others and how they are viewed by others is a critical causal driver of leadership success or failure. Tony Blair was able to reverse a multi-decade decline in Labour Party electoral success when he realized, “People judge us on their instincts about what they believe our instincts to be. And that man polishing his car was clear: His instincts were to get on in life, and he thought our instincts were to stop him.” 


KPIs for a Chief Information Security Officer (CISO)

Many might think the finances of a company would be the sole responsibility of the chief financial officer and their team. However, the CISO is also responsible for returns on any investments in information security. This is a crucial benchmark for a CISO. They’re responsible for the organization gaining value from new security technology investments and security policies while keeping costs down. They must also maintain a productive department — which in financial terms means valuable — and a training program worth investing in (CISO-Portal, 2021). While CISOs are responsible for security, they also must consider the financial impact on the business if a cyberattack occurs. An estimated recovery budget should be put in place to prepare for the potential financial impact of the attack. The actual cost should be equal to or less than the budgeted total and include direct costs, indirect costs, and possible fines (Castellan). One key metric CISOs can use to gauge security team effectiveness is IT security staff job satisfaction.


Microsoft Security Copilot harnesses AI to give superpowers to cybersecurity fighters

With Microsoft Security Copilot, defenders can respond to incidents within minutes, get critical step-by-step guidance through natural language-based investigations, catch what would otherwise go undetected, and get summaries of any process or event. Security professionals will be able to utilize the prompt bar to ask for summaries on vulnerabilities, incidents in the enterprise, and even more information on specific links and files. Using generative AI and both internal and external organizational information, Copilot generates a response with reference to sources. Like most AI models, it won't always perform perfectly and it can make mistakes. However, Security Copilot works in a closed-loop learning system that includes a built-in tool for users to directly provide feedback. And while at launch it will incorporate Microsoft's security products, the company claims that over time it will "expand to a growing ecosystem of third-party products" as well. 


Plugging the cybersecurity skills gap to retain security professionals

Worryingly (but entirely unsurprisingly), any organisation facing a cyber skill gap is much more susceptible to breaches. Indeed, industry body ISACA found that 69% of those organisations that have suffered a cyber-attack in the past year were somewhat or significantly understaffed. What truly compounds these concerns, however, is the potential impact that breaches can have. According to IBM’s Cost of a Data Breach Report 2022, the average total cost of a data breach is now $4.35 million. This combination of statistics is undoubtedly anxiety-inducing. However, attacks aren’t a lost cause or an inevitability which simply can’t be prevented. ... It should be noted that, at least in most cases, organisations are not doing this to eliminate the need for cybersecurity workers altogether. Artificial intelligence is nowhere near the level of sophistication required to achieve this in a security context. And really, human input is unlikely ever to become unnecessary, at least in some capacity.


Manufacturing is the most targeted sector by cyberattacks. Here's why increased security matters

One of the manufacturing sector’s main struggles is having a fragmented approach to managing cyber-related issues. In the European Union, a new legislative proposal, the Cyber Resilience Act, is being discussed to introduce the mandatory cybersecurity requirements for hardware and software products throughout their lifecycle. Moreover, the new NIS 2 and Critical Entities Resilience (CER) directives classify certain manufacturing industries as important or “essential entities,” requiring them to manage their security risks and prevent or minimize the impact of incidents on recipients of their services. In the United States, various federal regulations have been imposed on specific sectors like water, transportation and pipelines and a national cybersecurity strategy was recently released. The International Electrotechnical Commission’s IEC 62443 is considered by many to be the primary cybersecurity standard for industrial control systems but it is complex. 


Tony McCandless – The role of generative AI in intelligent automation

Firstly, there are a lot of financial services companies that are at the forefront of customer experience, to some degree, because they’ve got particular products to sell that lend themselves well to AI. They can implement capabilities like data analytics in order to know whether a customer is likely to buy a certain product. And then, they can reach out to companies like us and utilise a choice of generative AI-powered scenarios. This looks set to continue evolving. I think as well, in areas like citizen services — certainly from a UK perspective — most councils are really cash strapped, and are having to make critical decisions about service provision to citizens. There is also a digital access gap that we have to focus on closing. While some councils are proving good at addressing this, others potentially need a bit more investment, and collaboration. We’ve got 10 grandkids, and you should see a couple of the younger ones with technology — their ability to pick up a tablet, without knowing what a keyboard is, is just mind blowing.


Q&A: Cisco CIO Fletcher Previn on the challenges of a hybrid workplace

Our policy around hybrid work is that we want the office to be a magnet and not a mandate. In all likelihood, the role of the office is for most people not going to be a place where you go eight hours a day to do work. It’s going to be a place where we occasionally gather for some purpose. And, so as a result, we’re not mandating any particular prescriptive for how many days people should be in the office. It’s totally based on the type of work teams do, how collaborative that works needs to be, does it really benefit from people being together, or is it really individual work. And that’s really best determined at the individual team level than any sort of an arbitrary formula. The value of being in the office is proportionate to the number of other people who are also in the office at the same time you’re there. So, these things tend to be more about gathering for a team meeting, a client briefing, a white boarding session and the like. When everybody was remote, it was a great equalizer because everyone was on a similar footing.


Pursuing Nontraditional IT Candidates: Methods to Expand Talent Pipelines

Felicia Lyon, principal, human capital advisory for KPMG, says developing a strategy for nontraditional hires should start with leadership setting forth a vision for talent that is inclusive and skill-based. “Execution of that strategy will require involvement from stakeholders that span the entire organization,” she explains. “Business stakeholders should work closely with HR to identify roles that will be a good fit.” She adds that while there is a tendency to start small via pilot programs, research has shown that cohort programs are more efficient. “Companies should also look to external partners like apprenticeship programs and community colleges who can help them build a capability around successfully supporting and developing non-traditional talent,” Lyon says. Watson explains Clio uses many overlapping programs to widen the net of candidates in technical roles. “Our talent acquisition team helps identify opportunities to recruit non-traditional areas of available talent,” he says. 



Quote for the day:

"If I have seen farther than others, it is because I was standing on the shoulder of giants." -- Isaac Newton

Daily Tech Digest - February 23, 2023

Trends in Data Governance in 2023: Maturation Toward a Service Model

Organizations will increasingly adopt a Data Governance service model as they increase implementations of AI technologies. The EU and U.S. plan to impose new regulations to protect consumers and impact how algorithms can ingest, use, transform, and make recommendations based on datasets. Companies have a short time to ramp up their Data Governance responses to AI because many algorithms adjust inputs and outputs in real time. Organizations need more Data Governance preparation, as only 30% of respondents to a McKinsey AI study recognized potential legal risks as relevant. The firms, blinded to the importance of AI regulations, will face increased pressure to adapt their Data Governance approaches by the end of 2023. The EU’s draft AI regulations promise to impose considerably larger fines on companies that fail to comply: 6% of their global revenue, instead of the 4% levied by the GDPR. Consequently, worker engagement with and adoption of Data Governance updates, in preparation for AI regulations, will play a crucial role in 2023.


Sci-fi magazine halts new submissions after a surge in AI-written stories

Clarke acknowledged there are tools available for detecting plagiarized and machine-written text, but noted they are prone to false negatives and positives. OpenAI recently released a free classifier tool to detect AI-generated text, but also noted it was "imperfect" and it was still not known whether it was actually useful. The classifier correctly identifies 26% of AI-written text as "likely AI-written" -- its true positive rate -- and incorrectly identifies human-written text as AI-written 9% of the time -- its false positive rate. Clarke outlines a number of approaches publishers could take besides implementing third-party detection tools, which he thinks most short fiction markets currently can't afford. Other techniques could include blocking submissions over a VPN or blocking submissions from regions associated with a higher percentage of fraudulent submissions. "It's not just going to go away on its own and I don't have a solution. I'm tinkering with some, but this isn't a game of whack-a-mole that anyone can 'win.' The best we can hope for is to bail enough water to stay afloat," wrote Clarke.
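
Those two rates by themselves say little until you factor in how many submissions are actually AI-written. A minimal sketch, using Bayes' rule with the classifier rates cited above (the prevalence figure is an assumption for illustration, not from the article):

```python
# Estimate the probability that a flagged submission is actually
# AI-written, given the classifier's reported accuracy rates.

def flagged_is_ai(prevalence, tpr=0.26, fpr=0.09):
    """P(AI-written | flagged) for a given share of AI submissions."""
    flagged_ai = prevalence * tpr            # AI texts correctly flagged
    flagged_human = (1 - prevalence) * fpr   # human texts wrongly flagged
    return flagged_ai / (flagged_ai + flagged_human)

# If 10% of submissions were AI-written (an assumed figure), roughly
# three out of four flags would still point at human authors:
print(round(flagged_is_ai(0.10), 2))  # → 0.24
```

This is why Clarke is wary of relying on detection tools alone: at low prevalence, even a modest false positive rate swamps the true positives.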


Pairing AI with Tech Pros: A Roadmap to Successful Implementation

“The technology can also automatically check the quality and interpret data where metadata is not available, interpret tabular data and summarize it with natural text, and jointly interpret image, text, and tabular data,” he says. Krishna cautions that while generative AI has exciting potential, the recent focus on the technology has also reinforced the importance of responsible AI. “Going forward, organizations will be using AI methodologies to make decisions for their customers, employees, vendors and everyone associated with them,” he says. “A responsibility charter needs to be sponsored by C-suite leaders and developed through dynamic and consistent discussions led by the leaders in compliance, risk and data analytics.” Lo Giudice adds it is important for organizational leaders and IT workers, such as software developers, to come together and decide which AI-based tools could be deployed and the strategy behind that deployment. “Developers are influencers of this, because if they get excited about it, it will win,” he says.


Platforming the Developer Experience

With intuitive, self-service workflows and all the tools developers need, they rarely, if ever, have to think about ‘the how’ of getting their software into the hands of users. And this works if and when an organization does at least a couple of things right: it prioritizes the developer experience and empowers other parts of the organization to answer the question, “How can we create the optimal developer experience?” And it puts resources behind understanding and building the best developer experience, which is where both the developer platform idea and the idea of DevOps teams as “fixers” emerge. Does this mean the “optimal experience” can’t be optimized? Does it mean developers cannot have input into their own (or more general) developer experience(s)? No. In fact, part of what makes the developer platform idea compelling is that developers don’t have to weigh in or make decisions on the platform or tooling. Still, it’s possible to let them have that freedom if the team or organization wants to. Bottom line: there is no one-size-fits-all developer platform any more than there is a single developer experience.


How IT professionals can change careers to cyber security

While most IT professionals will have these skills on a basic level, many will only understand them as needed for their own day-to-day work, Teale says. Therefore, additional training is sometimes necessary. Many IT professionals may not need to fork out for a cyber security degree, although certifications might be a helpful way forward. Basic foundational books and courses can offer some guidance, and an apprenticeship or course from a certified body might make sense for IT professionals who are looking to switch early in their careers, Finch says. ... There are a number of entry-level courses available, such as CISMP or CompTIA, says Freha Arshad, managing director, Accenture Security in the UK. “All of the major cloud service providers offer security courses for varied levels and skill sets. With enterprises increasingly focused on the cloud, this area is also a good place to start.” In addition, says McQuade, there are free resources online to support self-learning: “HackXpert and TryHackMe provide training labs, while Cybrary offers a library of helpful videos, labs and training exams. ...”


CISOs struggle with stress and limited resources

The lack of bandwidth and resources is not only impacting CISOs, but their teams as well. ... Relentless stress levels are also affecting recruitment efforts with 83% of CISOs admitting they have had to compromise on the staff they hire to fill gaps left by employees who have quit their job. More than a third of the CISOs surveyed said they are either actively looking for or considering a new role. “The results from our mental health survey are devastating but it’s not all doom and gloom. Our research found that CISOs know exactly what they need to reduce stress levels: more automated tools to manage repetitive tasks, better training, and the ability to outsource some work responsibilities,” said Eyal Gruner, CEO, Cynet. “One of the most eye-opening insights from the report was the fact that more than 50% of the CISOs we surveyed said consolidating multiple security technologies on a single platform would decrease their work-related stress levels,” Gruner added.


Making Risk Management for Agile Projects Effective

Agile claims to be risk-driven, and through its implicit practices it lends itself to an adaptive risk management style. For instance, the adaptability of sprint planning is a response to uncertainty, “biting off a small chunk at a time” to eventually deliver the finished solution. Due to its inherent nature, Agile can mitigate some risk that occurs during the sprint cycle, but this is not the only risk that may occur during a project’s lifespan. For example, in larger enterprises, there is more risk related to the external, organizational and project environments, including corporate reputation, project financing, user adoption of business changes and regulatory compliance. Management of this type of “project” risk is not addressed in most Agile literature, which focuses on risk that may occur at the sprint level. One recent proposal to address this limitation is to adopt an Agile risk management process that tailors Agile methodologies to include project and enterprise risk management approaches in line with the risk context for the project.


Robotic Process Automation: Confluence of Automation and AI

According to Deloitte, it can lead to improved service, fewer mistakes, increased auditability, increased productivity, and lower costs. It makes it possible to have a workforce that is automated in a variety of ways around the clock. More sophisticated tools are taking the place of the outdated methods that relied on Excel sheets and macros. Functions like dashboarding, workflow, and proactive system and process monitoring are also becoming increasingly important components of technology infrastructures thanks to these new tools. However, these “new” tools frequently need to interact with older systems, which is not always possible. Extracting, formatting, shaping, and distributing the data in a way that a downstream system can consume has traditionally necessitated human interaction. RPA automates this process in a more controlled, efficient, and less labor-intensive manner. Put simply, RPA bots can completely automate human actions like opening files, entering data, and copy-pasting fields.
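
As a toy illustration of the extract-format-distribute step described above, here is a minimal Python sketch; the file contents and field names are invented for the example, and a real RPA tool would handle this through its own connectors rather than hand-written code:

```python
import csv
import io

# Simulate a legacy CSV export that a human would otherwise
# copy-paste from by hand.
legacy_export = io.StringIO(
    "CustomerName,Amount\n"
    "Acme Corp,1200.50\n"
    "Globex,980.00\n"
)

# "Bot" step: extract each record, reformat the fields, and emit rows
# in the shape a downstream system expects (e.g. integer cents).
reader = csv.DictReader(legacy_export)
downstream_rows = [
    {
        "customer": row["CustomerName"].upper(),
        "amount_cents": round(float(row["Amount"]) * 100),
    }
    for row in reader
]

print(downstream_rows[0])  # → {'customer': 'ACME CORP', 'amount_cents': 120050}
```

The value of RPA is that this kind of mapping runs unattended and identically every time, instead of depending on a person re-keying the data.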


The Future of Network Security: Predictive Analytics and ML-Driven Solutions

ML-driven network security solutions refer to the use of self-learning algorithms and other predictive technologies (statistics, time-series analysis, correlations, etc.) to automate various aspects of threat detection. ML algorithms are becoming increasingly popular for scalable technologies because of the limitations of traditional rule-based security solutions. Data is processed through advanced algorithms that can identify patterns, anomalies, and other subtle indicators of malicious activity, including new and evolving threats that may not have known bad indicators or existing signatures. Detecting known threat indicators and blocking established attack patterns is still a crucial part of overall cyber hygiene. However, traditional approaches built on threat feeds and static rules can become time-consuming to maintain across all the different log sources. In addition, Indicators of Attack (IoA) or Indicators of Compromise (IoC) may not be available at the time of an attack, or are quickly outdated.
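
To make the contrast concrete: a rule-based check matches known bad indicators, while an anomaly-based check flags statistical outliers with no signature at all. A dependency-free sketch of the latter, where the login counts and z-score threshold are illustrative assumptions rather than a production detector:

```python
import statistics

# Hourly login counts for one account; the final hour is an outlier.
logins_per_hour = [4, 5, 3, 6, 4, 5, 4, 3, 5, 48]

mean = statistics.mean(logins_per_hour)
stdev = statistics.stdev(logins_per_hour)

# Flag hours whose z-score exceeds a (tunable) threshold. No threat-feed
# indicator or signature is needed to surface the anomaly.
anomalies = [x for x in logins_per_hour if abs(x - mean) / stdev > 2.5]
print(anomalies)  # → [48]
```

Real ML-driven solutions layer far richer models over many log sources, but the principle is the same: the baseline is learned from the data itself, so novel activity can be flagged before any IoA or IoC exists for it.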


1 in 4 CISOs Wants to Say Sayonara to Security

CISOs aren't necessarily running down alerts constantly the way their employees are, but they're overloaded with other career fatigue factors. "CISOs are constantly trying to balance high expectations against an absence of the tools needed to meet those expectations," Gartner analysts wrote in the prediction piece. "Compliance-centric cybersecurity programs, significantly low executive support, and subpar industry-level maturity are all indicators of an organization that does not view security risk management as critical to business success." One of the big factors that could have CISOs reconsidering their career trajectory in cybersecurity altogether is the fear about what will happen to their professional reputation if their company gets breached, says Diana Kelley, a veteran cybersecurity executive and co-founder and CSO of Cybrize, a cybersecurity workforce planning platform. She says CISOs and CSOs worry about "having their name dragged through the mud" after a breach, or even facing criminal charges, which feels more possible in the fallout from the conviction of Uber's Joe Sullivan last year.



Quote for the day:

"Leadership is a two-way street, loyalty up and loyalty down." -- Grace Murray Hopper