
Daily Tech Digest - June 23, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The 10 biggest issues IT faces today

“The AI explosion and how quickly it has come upon us is the top issue for me,” says Mark Sherwood, executive vice president and CIO of Wolters Kluwer, a global professional services and software firm. “In my experience, AI has changed and progressed faster than anything I’ve ever seen.” To keep up with that rapid evolution, Sherwood says he is focused on making innovation part of everyday work for his engineering team. ... “Modern digital platforms generate staggering volumes of telemetry, logs, and metrics across an increasingly complex and distributed architecture. Without intelligent systems, IT teams drown in alert fatigue or miss critical signals amid the noise,” he explains. “What was once a manageable rules-based monitoring challenge has evolved into a big data and machine learning problem.” He continues: “This shift requires IT organizations to rethink how they ingest, manage, and act upon operational data. It’s not just about observability; it’s about interpretability and actionability at scale.” ... CIOs today are also paying closer attention to geopolitical news and determining what it means for them, their IT departments, and their organizations. “These are uncertain times geopolitically, and CIOs are asking how that will affect IT portfolios and budgets and initiatives,” Squeo says.
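The shift from rules-based monitoring to statistical detection can be illustrated with a toy baseline check: instead of a fixed threshold, a sample is flagged when it deviates strongly from recent history. This is only a sketch; the metric names and numbers are hypothetical.

```python
# Illustrative sketch: statistical baselining instead of a static alert rule.
from statistics import mean, stdev

def anomalous(history, value, z_threshold=3.0):
    """Flag a metric sample that deviates strongly from its recent baseline."""
    if len(history) < 2:
        return False  # not enough history to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # e.g. requests/sec
print(anomalous(baseline, 101))  # ordinary sample: no alert
print(anomalous(baseline, 250))  # clear spike: alert
```

A static rule such as "alert above 200" would miss a slow drift upward; a baseline-relative check adapts as the metric's normal range changes, which is the essence of the "big data and machine learning problem" Sherwood describes.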


Clouded judgement: Resilience, risk and the rise of repatriation

While the findings reflect growing concern, they also highlight a strategic shift, with 78% of leaders now considering digital sovereignty when selecting tech partners, and 68% saying they will only adopt AI services where they have full certainty over data ownership. For some, the answer is to take back control. Cloud repatriation is gaining some traction, at least in terms of mindset, but as yet this is not translating into a mass exodus from the hyperscalers. Even so, calls for digital sovereignty are getting louder. In Europe, the Euro-Stack open letter has reignited the debate, urging policymakers to champion a competitive, sovereign digital infrastructure. But while politics might be a trigger, the key question is not whether businesses are abandoning cloud (most aren’t) but whether the balance of cloud usage is changing, driven as much by cost as by performance needs and rising regulatory risks. ... “Despite access to cloud cost-optimisation teams, there was limited room to reduce expenses,” says Jonny Huxtable, CEO of LinkPool. After assessing bare-metal and colocation options, LinkPool decided to move fully to Pulsant’s colocation service. The company claims the move achieved a 90% to 95% cost reduction alongside major performance improvements and enhanced disaster recovery capabilities.


Cookie management under the Digital Personal Data Protection Act, 2023

Effective cookie management under the DPDP Act, as detailed in the BRDCMS, requires real-time updates to user preferences. Users must have access to a dedicated cookie preferences interface that allows them to modify or revoke their consent without undue complexity or delay. This interface should be easily accessible, typically through privacy settings or a dedicated cookie management dashboard. The real-time nature of these updates is crucial for maintaining compliance with the principles of consent enshrined in the DPDP Act. When a user withdraws consent for specific cookie categories, the system must immediately cease the collection and processing of data through those cookies, ensuring that the user’s privacy preferences are respected without delay. Transparency is one of the fundamental pillars of the DPDP Act and extends to cookie usage disclosure. While the DPDP Act itself remains silent on specific cookie policies, the BRDCMS mandates a clear and accessible cookie policy outlining the purposes of cookie usage, data sharing practices, and the implications of different consent choices. The cookie policy serves as a comprehensive resource enabling users to make informed decisions about their consent preferences.
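The "immediate effect" requirement can be sketched as a consent registry that every collection call consults at the moment of collection, so a withdrawal needs no batch sync to take hold. This is a hypothetical illustration, not an implementation of the BRDCMS.

```python
# Hypothetical sketch: per-category consent that takes effect immediately
# on withdrawal, because collection is gated on the live preference set.
class ConsentRegistry:
    def __init__(self):
        self._granted = set()

    def grant(self, category):
        self._granted.add(category)

    def withdraw(self, category):
        self._granted.discard(category)  # effective for the very next check

    def may_collect(self, category):
        """Every collection call checks the user's current preference."""
        return category in self._granted

prefs = ConsentRegistry()
prefs.grant("analytics")
print(prefs.may_collect("analytics"))  # consent stands
prefs.withdraw("analytics")
print(prefs.may_collect("analytics"))  # withdrawal honoured without delay
```

The design point is that no cached copy of the preference exists outside the registry; that is what makes the withdrawal real-time rather than eventually consistent.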


AI agents win over professionals - but only to do their grunt work, Stanford study finds

According to the report, the majority of workers are ready to embrace agents for the automation of low-stakes and repetitive tasks, "even after reflecting on potential job loss concerns and work enjoyment." Respondents said they hoped to focus on more engaging and important tasks, mirroring what's become something of a marketing mantra among big tech companies pushing AI agents: that these systems will free workers and businesses from drudgery, so they can focus on more meaningful work. The authors also noted "critical mismatches" between the tasks that AI agents are being deployed to handle -- such as software development and business analysis -- and the tasks that workers are actually looking to automate. ... The study could have big implications for the future of human-AI collaboration in the workplace. Using a metric that they call the Human Agency Scale (HAS), the authors found "that workers generally prefer higher levels of human agency than what experts deem technologically necessary." ... The report further showed that the rise of AI automation is causing a shift in the human skills that are most valued in the workplace: information-processing and analysis skills, the authors said, are becoming less valuable as machines become increasingly competent in these domains, while interpersonal skills -- including "assisting and caring for others" -- are more important than ever.


New OLTP: Postgres With Separate Compute and Storage

The traditional methods for integrating databases are complex and not suited to AI, Xin said. The challenge lies in integrating analytics and AI with transactional workloads. Consider what developers would do when adding a feature to a codebase, Xin said in his keynote address at the Data + AI Summit. They’d create a new branch of the codebase and make changes to the new branch. They’d use that branch to check bugs, perform testing and so on. Xin said creating a new branch is an instant operation. What’s the equivalent for databases? Your only option is to clone your production database, and that might take days. How do you set up secure networking? How do you create ETL pipelines and log data from one to another? ... Streaming is now a first-class citizen in the enterprise, Mohan told me. The separation of compute and storage makes a difference. We are approaching an era when applications will scale infinitely, both in terms of the number of instances and their scale-out capabilities. And that leads us to new questions about how we start to think about evaluation, observability and semantics. Accuracy matters. ... ADP may have the world’s best payroll data, Mohan said, but that data has to be processed through ETL into an analytics solution like Databricks. Then comes the analytics and the data science work. The customer has to perform a significant amount of data engineering work and preparation.
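The branch-vs-clone contrast comes down to copy-on-write: a branch shares the parent's data and records only its own changes, so creating it is instant. A tiny key-value analogy (purely conceptual, not how any particular database implements it):

```python
# Conceptual sketch: branching a tiny key-value "database" copy-on-write
# style. Creating the branch copies nothing; writes land in the branch
# layer only, while reads fall through to the parent.
from collections import ChainMap

production = {"users": 1000, "orders": 250}
branch = ChainMap({}, production)   # instant: no data is copied

branch["orders"] = 999              # write is captured in the branch layer
print(branch["orders"])             # 999: the branch sees its own write
print(production["orders"])         # 250: production is untouched
print(branch["users"])              # 1000: unchanged keys read through
```

A full clone would copy every row up front (the "might take days" path); the copy-on-write branch defers all copying to the moment of a write, which is what makes database branching feel like a git branch.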


Can AI Save Us from AI? The High-Stakes Race in Cybersecurity

Reluctant executives and budget hawks can shoulder some of the responsibility for slow AI adoption, but they’re hardly the only barriers. Increasingly, employees are voicing legitimate concerns about surveillance, privacy and the long-term impact of automation on job security. At the same time, enterprises may face structural issues when it comes to integration: fragmented systems, a lack of data inventory and access controls, and other legacy architectures can also hinder the secure integration and scalability of AI-driven security solutions. Meanwhile, bad actors face none of these considerations. They have immediate, unfettered access to open-source AI tools, which can enhance the speed and force of an attack. They operate without AI tool guardrails, governance, oversight or ethical constraints. ... Insider threat detection is also maturing. AI models can detect suspicious behavior, such as unusual access to data, privilege changes or timing inconsistencies, that may indicate a compromised account or insider threat. Early adopters, such as financial institutions, are using behavioral AI to flag synthetic identities by spotting subtle deviations that traditional tools often miss. They can also monitor behavioral intent signals, such as a worker researching resignation policies before initiating mass file downloads, providing early warnings of potential data exfiltration.


The complexities of satellite compute

“In cellular communications on the ground, this was solved a few decades ago. But doing it in space, you have to have the computing horsepower to do those handoffs as well as the throughput capability.” This additional compute needs to be in "a radiation tolerant form, and in such a way that they don't consume too much power and generate too much heat to cause massive thermal problems on the satellites." In LEO, satellites face a barrage of radiation. "It's an environment that's very rich in protons," O'Neill says. "And protons can cause upsets in configuration registers, they can even cause latch-ups in certain integrated circuits." The need to be more radiation tolerant has also pushed the industry towards newer hardware as, the smaller the process node, the lower the operating voltage. "Reducing operating voltage makes you less susceptible to destructive effects," O'Neill explains. One failure mode, the single-event latch-up, causes the device to conduct a large current from power to ground through the integrated circuit, potentially frying it. ... Modern integrated circuits are a lot less susceptible to these single-event latch-ups, but are not completely immune. "While the core of the circuit may be operating at a very low voltage, 0.7 or 0.8 volts, you still have I/O circuits in the integrated circuit that may be required to interoperate with other ICs at 3.3 volts or 2.5 volts," O'Neill adds.
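The register upsets O'Neill mentions are commonly mitigated (beyond the article's scope) with triple modular redundancy: store three copies of a value and majority-vote each bit on read, so a single flipped bit in one copy is masked. A minimal sketch of the voter:

```python
# Triple modular redundancy: bitwise majority vote across three redundant
# register copies, masking a single-event upset in any one copy.
def tmr_vote(a, b, c):
    """Each output bit is 1 iff at least two of the three inputs have it set."""
    return (a & b) | (a & c) | (b & c)

stored = 0b1011_0010
copy_a, copy_b, copy_c = stored, stored, stored
copy_b ^= 0b0000_1000              # a proton strike flips one bit in one copy
print(bin(tmr_vote(copy_a, copy_b, copy_c)))  # the vote recovers 0b10110010
```

TMR trades a threefold area and power cost for tolerance of any single upset per word, which is why the power and thermal budget quoted above matters so much on a satellite.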


How CISOs can justify security investments in financial terms

A common challenge we see is the absence of a formal ERM program, or the fragmentation of risk functions, where enterprise, cybersecurity, and third-party risks are evaluated using different impact criteria. This lack of alignment makes it difficult for CISOs to communicate effectively with the C-suite and board. Standardizing risk programs and using consistent impact criteria enables clearer risk comparisons, shared understanding, and more strategic decision-making. This challenge is further exacerbated by the rise of AI-specific regulations and frameworks, including the NIST AI Risk Management Framework, the EU AI Act, the NYC Bias Audit Law, and the Colorado Artificial Intelligence Act. ... Communicating security investments in clear, business-aligned risk terms—such as High, Medium, or Low—using agreed-upon impact criteria like financial exposure, operational disruption, reputational harm, and customer impact makes it significantly easier to justify spending and align with enterprise priorities. ... In our Virtual CISO engagements, we’ve found that a risk-based, outcome-driven approach is highly effective with executive leadership. We frame cyber risk tolerance in financial and operational terms, quantify the business value of proposed investments, and tie security initiatives directly to strategic objectives. 
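The shared-criteria rating argued for above can be made concrete with a toy scorer: each agreed impact dimension gets a 1-3 score and the worst dimension sets the tier. The dimensions come from the passage; the scoring rule itself is a hypothetical simplification.

```python
# Sketch of consistent risk rating across programs: score every risk on
# the same impact dimensions, map the worst score to High/Medium/Low.
IMPACT_DIMENSIONS = ("financial", "operational", "reputational", "customer")
TIERS = {1: "Low", 2: "Medium", 3: "High"}

def rate_risk(scores):
    """Rate a risk from per-dimension scores (1 = low .. 3 = high impact)."""
    worst = max(scores[d] for d in IMPACT_DIMENSIONS)
    return TIERS[worst]

print(rate_risk({"financial": 3, "operational": 1, "reputational": 2, "customer": 1}))
print(rate_risk({"financial": 1, "operational": 1, "reputational": 1, "customer": 1}))
```

The value of the exercise is less the arithmetic than the shared vocabulary: when enterprise, cybersecurity, and third-party risks all use the same dimensions and tiers, a "High" means the same thing in every board conversation.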


From fear to fluency: Why empathy is the missing ingredient in AI rollouts

In the past, teams had time to adapt to new technologies. Operating systems or enterprise resource planning (ERP) tools evolved over years, giving users more room to learn these platforms and acquire the skills to use them. Unlike previous tech shifts, this one with AI doesn’t come with a long runway. Change arrives overnight, and expectations follow just as fast. Many employees feel like they’re being asked to keep pace with systems they haven’t had time to learn, let alone trust. A recent example would be ChatGPT reaching 100 million monthly active users just two months after launch. ... This underlines the emotional and behavioral complexity of adoption. Some people are naturally curious and quick to experiment with new technology while others are skeptical, risk-averse or anxious about job security. ... Adopting AI is not just a technical initiative, it’s a cultural reset, one that challenges leaders to show up with more empathy and not just expertise. Success depends on how well leaders can inspire trust and empathy across their organizations. The 4 E’s of adoption offer more than a framework. They reflect a leadership mindset rooted in inclusion, clarity and care. By embedding empathy into structure and using metrics to illuminate progress rather than pressure outcomes, teams become more adaptable and resilient.


Why networks need AIOps and predictive analytics

Predictive Analytics – a key capability of AIOps – forecasts future network performance and problems, enabling early intervention and proactive maintenance. Further, early prediction of bottlenecks or additional requirements helps to optimise the management of network resources. For example, when organisations have advance warning about traffic surges, they can allocate capacity to prevent congestion and outages, and enhance overall network performance. A range of mundane tasks, from incident response to work order generation to network configuration to proactive IT health checks and maintenance scheduling, can be automated with AIOps to reduce the load on IT staff and free them up to concentrate on more strategic activities. ... When traditional monitoring tools were unable to identify bottlenecks in a healthcare provider’s network that was seeing a slowdown in its electronic health records (EHR) system during busy hours, a switch to AIOps resolved the problem. By enabling observability across domains, the system highlighted that performance dipped when users logged in during shift changes. It also predicted slowdowns half an hour in advance and automatically provisioned additional resources to handle the surge in activity. The result was a 70 percent reduction in the most important EHR slowdowns, improvement in system responsiveness, and freeing up of IT human resources.
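The predict-then-provision loop in the EHR example can be sketched in a few lines: forecast the next interval from the recent trend, then size capacity ahead of the surge. The forecast rule and numbers are illustrative only.

```python
# Toy sketch of predictive provisioning: forecast the next load sample
# from the recent trend, then compute capacity to allocate in advance.
def forecast_next(samples):
    """Naive linear forecast: last value plus the average recent delta."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return samples[-1] + sum(deltas) / len(deltas)

def capacity_needed(samples, per_unit=100):
    """Units of capacity to provision ahead of the forecast load."""
    predicted = forecast_next(samples)
    return -(-int(predicted) // per_unit)  # ceiling division

logins = [120, 180, 260, 340]        # hypothetical ramp into a shift change
print(round(forecast_next(logins)))  # predicted next-interval load
print(capacity_needed(logins))       # units to provision before the surge
```

Real AIOps platforms use far richer models (seasonality, multi-domain signals), but the structure is the same: act on the forecast before the congestion arrives rather than on the alert after it.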

Daily Tech Digest - April 14, 2025


Quote for the day:

"Power should be reserved for weightlifting and boats, and leadership really involves responsibility." -- Herb Kelleher



The quiet data breach hiding in AI workflows

Prompt leaks happen when sensitive data, such as proprietary information, personal records, or internal communications, is unintentionally exposed through interactions with LLMs. These leaks can occur through both user inputs and model outputs. On the input side, the most common risk comes from employees. A developer might paste proprietary code into an AI tool to get debugging help. A salesperson might upload a contract to rewrite it in plain language. These prompts can contain names, internal systems info, financials, or even credentials. Once entered into a public LLM, that data is often logged, cached, or retained without the organization’s control. Even when companies adopt enterprise-grade LLMs, the risk doesn’t go away. Researchers found that many inputs posed some level of data leakage risk, including personal identifiers, financial data, and business-sensitive information. Output-based prompt leaks are even harder to detect. If an LLM is fine-tuned on confidential documents such as HR records or customer service transcripts, it might reproduce specific phrases, names, or private information when queried. This is known as data cross-contamination, and it can occur even in well-designed systems if access controls are loose or the training data was not properly scrubbed.
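Input-side screening of the kind described above is often a pre-flight redaction pass over the prompt. A minimal sketch, with only two example patterns; real DLP controls cover far more identifier and secret formats:

```python
# Minimal illustration of input-side prompt screening: redact obvious
# identifiers and secrets before the prompt leaves the organization.
# The two patterns below are examples, not a complete rule set.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{8,}\b"),
}

def redact(prompt):
    """Replace each matched span with a category placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Debug this: auth fails for jane.doe@example.com with sk-AbC123xyz9"))
```

Redaction at the boundary addresses the input-side leak; the output-side cross-contamination risk described above needs different controls (access-scoped models and scrubbed training data), since no prompt filter can remove what the model has already memorised.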


The Rise of Security Debt: Your Security IOUs Are Due

Despite measurable improvements, security debt — defined as flaws that remain unfixed for more than a year after discovery — continues to put enterprises at risk. Security debt impacts almost three-quarters (74.2%) of organizations, up from 71% in previous measurements. More frighteningly, half of all organizations suffer from critical security debt: a dangerous combination of high-severity, long-unresolved flaws. There's a reason it is described as critical debt: the longer a security flaw survives within an enterprise, the less likely it will be resolved. Today, more than a quarter (28%) of flaws remain open two years after discovery, and even after five years, 9% of flaws still linger in applications. ... Applications are only as secure as the code used to write them, and security flaws are a fact of life in every code base in the world. That being said, the origin of the code that is being used matters. Leveraging third-party code has become standard practice across the industry, which introduces added risks. ... organizations need the ability to correlate and contextualize findings in a single view to prioritize their backlog based on context. This allows companies to reduce the most risk with the least effort. Since the average time to fix flaws has increased dramatically, programs seeking to improve their security posture must focus on the findings that matter most in their specific context. 
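The "correlate and contextualize, then prioritize" advice can be sketched as a backlog scorer that surfaces high-severity, long-lived flaws first. The fields and weights below are hypothetical; the point is that age and severity combine rather than compete.

```python
# Sketch of context-based backlog ordering: rank open flaws so that
# high-severity, long-unfixed debt rises to the top. Weights are examples.
def debt_priority(flaw):
    """Higher score = fix sooner. Severity is 1-10, age in days."""
    is_debt = flaw["age_days"] > 365          # unfixed for over a year
    return flaw["severity"] * 10 + flaw["age_days"] / 100 + (50 if is_debt else 0)

backlog = [
    {"id": "F1", "severity": 9, "age_days": 30},    # new, severe
    {"id": "F2", "severity": 7, "age_days": 900},   # critical security debt
    {"id": "F3", "severity": 3, "age_days": 400},   # low-severity debt
]
ordered = sorted(backlog, key=debt_priority, reverse=True)
print([f["id"] for f in ordered])
```

Under this toy weighting the two-and-a-half-year-old high-severity flaw outranks the fresh critical one, reflecting the article's observation that the longer a flaw survives, the less likely it is ever resolved without deliberate prioritization.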


How to Cut the Hidden Costs of IT Downtime

"Workers struggling with these problems waste productive time waiting for fixes," said Ryan MacDonald, CTO at Liquid Web. Businesses can reduce these costs by investing in proactive IT support, automating troubleshooting processes, and training workers on best practices to prevent repeat problems, he said. MacDonald explained that while tech failures are inevitable, companies often take a reactive rather than proactive approach to IT. Instead of addressing persistent issues at their root, organizations frequently apply short-term fixes, resulting in continuous inefficiencies and mounting expenses. ... Companies that fail to modernize their systems will continue to experience recurring IT problems that hinder productivity and increase operational costs. In addition to upgrading infrastructure, organizations must conduct regular IT audits to proactively identify inefficiencies before they escalate into major disruptions. MacDonald stressed the importance of continuous evaluation. "Regularly scheduled IT audits allow companies to find recurring inefficiencies and invest money into fixing them before they become costly disruption points," he said. Rather than waiting for systems to break, businesses should implement proactive IT strategies, which can save time, reduce financial losses, and improve overall system reliability.


A multicloud experiment in agentic AI: Lessons learned

At its core, an agentic AI system is a self-governing decision-making system. It uses AI to assign and execute tasks autonomously, responding to changing conditions while balancing cost, performance, resource availability, and other factors. I wanted to leverage multiple public cloud platforms harmoniously. The architecture would have to be flexible enough to balance cloud-specific features while achieving platform-agnostic consistency. ... challenges with interoperability, platform-specific nuances, and cost optimization remain. More work is needed to improve the viability of multicloud architectures. The big gotcha is that the cost was surprisingly high. The price of resource usage on public cloud providers, egress fees, and other expenses seemed to spring up unannounced. Using public clouds for agentic AI deployments may be too expensive for many organizations and push them to cheaper on-prem alternatives, including private clouds, managed services providers, and colocation providers. I can tell you firsthand that those platforms are more affordable in today’s market and provide many of the same services and tools. This experiment was a small but meaningful step toward realizing a future where cloud environments serve as dynamic, self-managing ecosystems.


What boards want and don’t want to hear from cybersecurity leaders

A lack of clarity can lead to either oversharing technical details or not providing enough strategic context. Paul Connelly, former CISO turned board advisor, independent director and mentor, finds many CISOs focus too heavily on metrics while the board is looking for more strategic insights. The board doesn’t need to know the results of your phishing test, says Connelly. Boards are focused on risks the organization faces, strategies to address these risks, progress updates, obstacles to success, and whether they’re tackling the right things. “I coach CISOs to study their board — read their bios, understand their background, and understand the fiduciary responsibility of a board,” he says. The goal is to understand the make-up of the board and their priorities and channel their metrics into risk and threat analysis for the business. Using this information, CISOs can develop a story about their program aligned with the business. “That high-level story — supported by measurements — is what boards want to hear, not a bunch of metrics on malicious emails and critical patches or scary Chicken Little-type of threats,” Connelly tells CSO. It’s not a one-way interaction, however, and many CISOs are engaging with boards that lack the appropriate skills and understanding to foster meaningful discussions on cyber threats. “Very few boards have any directors with true expertise in technology or cyber,” says Connelly.


The future of insurance is digital, intelligent, and customer-first

The Indian insurance sector is undergoing transformative changes, driven by insurtech innovations, personalised policies, and efficient claim settlements. Reliance General Insurance leads this evolution by integrating AI, data science, and automation to enhance customer experiences. According to Deloitte, 70% of Central European insurers have recently partnered with insurtechs, with 74% expressing satisfaction, highlighting the global trend of technological collaboration. Emphasising innovation, speed, and customer-centric measures, the industry aims to demystify insurance, boost its adoption, and eliminate service hindrances, steering towards a technology-oriented future. ... Protecting our customers’ data is essential at Reliance General Insurance. To prevent the misuse of customer information, the company employs a strong multi-layered security framework involving encryption, threat intelligence services, and real-time monitoring. To help mitigate these risks, we also offer cyber insurance products. ... Even as insurtech innovation drives progress, risk management remains paramount in the adoption of insurtech solutions. Seamlessly integrating new technologies is the objective, and Reliance General employs constant feedback monitoring to ensure new technologies meet security and regulatory standards.


Examining the business case for multi-million token LLMs

As enterprises weigh the costs of scaling infrastructure against potential gains in productivity and accuracy, the question remains: Are we unlocking new frontiers in AI reasoning, or simply stretching the limits of token memory without meaningful improvements? This article examines the technical and economic trade-offs, benchmarking challenges and evolving enterprise workflows shaping the future of large-context LLMs. ... Increasing the context window also helps the model better reference relevant details and reduces the likelihood of generating incorrect or fabricated information. A 2024 Stanford study found that 128K-token models reduced hallucination rates by 18% compared to RAG systems when analyzing merger agreements. However, early adopters have reported some challenges: JPMorgan Chase’s research demonstrates how models perform poorly on approximately 75% of their context, with performance on complex financial tasks collapsing to near-zero beyond 32K tokens. Models still broadly struggle with long-range recall, often prioritizing recent data over deeper insights. This raises questions: Does a 4-million-token window truly enhance reasoning, or is it just a costly expansion of memory? How much of this vast input does the model actually use? And do the benefits outweigh the rising computational costs?


IT compensation satisfaction at an all-time low

“We’re going through a leveling of the economy right now,” Sutton said, adding that during difficult business periods employees crave consistency and reliability. “There is a little bit of satisfaction and contentment with what is seen as a stable role.” Industry observers also said that although money is a critical factor in how appreciated employees feel, unhappiness with one’s IT role is often a result of other factors, such as changing job descriptions and a general lack of job security. “Compensation is not the only tool enterprises have to improve employee experience and satisfaction. Enterprises can make sure that their employees are focused on work that excites them and they can see the value of,” Forrester’s Mark said. “Provide ample opportunities for upskilling in line not just with the technology strategy, but also with employees’ career aspirations. Ensure that employees feel empowered and have autonomy over decisions which impact them, and of course manage work-life balance, demonstrating that organizations do not simply value the work outputs, but the employees themselves as unique individuals.” Matt Kimball, VP and principal analyst for Moor Insights and Strategy, agreed that employee sentiment goes well beyond salary and bonuses.


Amazon Gift Card Email Hooks Microsoft Credentials

The Cofense Phishing Defense Center (PDC) has recently identified a new credential phishing campaign that uses an email disguised as an Amazon e-gift card from the recipient’s employer. While the email appears to offer a substantial reward, its true purpose is to harvest Microsoft credentials from unsuspecting recipients. The combination of the large monetary value and the appearance of an email seemingly from their employer lulls the recipient into a false sense of security that leaves them unaware of the dangers ahead. ... Once the recipient submits their email address, they will be redirected to a phishing page, as shown in Figure 3. The phishing page is well-disguised as a legitimate Microsoft login site, once again prompting the victim to input their credentials. Legitimate Microsoft Outlook login pages should be hosted on domains belonging to Microsoft (such as live.com or outlook.com), but as you can see in Figure 3, the domain for this site is officefilecenter[.]com, which was created less than a month before the time of analysis. Credential phishing emails such as these are a perfect example of the various ways that threat actors can exploit the emotions of the recipient. Whether it is the theme of the phish, the content within, or the time of year, threat actors will utilize anything they can to make sure you do not catch on until it’s too late.
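The domain indicator in the analysis above can be turned into a simple heuristic: a page presenting a Microsoft login should sit on a Microsoft-owned domain. The allowlist and the crude domain extraction below are simplified examples, not a complete anti-phishing control.

```python
# Illustrative check inspired by the indicator above: compare the login
# page's registered domain against known-legitimate Microsoft domains.
from urllib.parse import urlparse

MICROSOFT_LOGIN_DOMAINS = {"live.com", "outlook.com", "microsoftonline.com"}

def looks_like_spoofed_login(url):
    """True when a Microsoft-branded login URL is not on a Microsoft domain."""
    host = urlparse(url).hostname or ""
    registered = ".".join(host.split(".")[-2:])  # crude eTLD+1 approximation
    return registered not in MICROSOFT_LOGIN_DOMAINS

print(looks_like_spoofed_login("https://login.officefilecenter.com/auth"))  # spoof
print(looks_like_spoofed_login("https://login.live.com/"))                  # genuine
```

A production control would pair this with domain-age lookups (the campaign's domain was under a month old) and a proper public-suffix list, since the last-two-labels shortcut misfires on multi-part TLDs such as co.uk.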


Driving Sustainability Forward with IIoT: Smarter Processes for a Greener Future

AI-driven IIoT systems are transforming how industries manage raw materials, inventory, and human resources. In smart factories, AI forecasts demand, streamlines production schedules, and optimizes supply chains to reduce waste and emissions. For instance, AI calculates the exact quantity of materials needed for production, preventing overstocking and minimizing excess. It also enhances SIOP and logistics by consolidating shipments and selecting eco-friendly transportation routes, reducing the carbon footprint of global supply chains. Predictive maintenance, powered by AI, contributes by detecting equipment issues early, preventing breakdowns, extending lifespan and uptime while reducing defective outputs. ... IIoT is a key enabler of the circular economy, which focuses on recycling, reusing, and reducing waste. Automated systems allow manufacturers to recycle heat, water, and materials within their facilities, creating closed-loop processes. For example, excess heat from industrial ovens can be captured and repurposed for heating water or other facility needs. While sensors monitor production processes to optimize material usage and reduce scrap, product take-back programs are another cornerstone of the circular economy. 

Daily Tech Digest - August 06, 2024

Why the Network Matters to Generative AI

Applications, today, are distributed. Our core research tells us more than half (60%) of organizations operate hybrid applications; that is, with components deployed in core, cloud, and edge locations. That makes the Internet their network, and the lifeline upon which they depend for speed and, ultimately, security. Furthermore, our focused research tells us that organizations are already multi-model, on average deploying 2.9 models. And where are those models going? Just over one-third (35%) are deploying in both public cloud and on-premises. Applications that use those models, of course, are being distributed in both environments. According to Red Hat, some of those models are being used to facilitate the modernization of legacy applications. ... One is likely tempted to ask why we need such a thing. The problem is we can’t affect the Internet. Not really. For all our attempts to use QoS to prioritize traffic and carefully select the right provider, who has all the right peering points, we can’t really do much about it. For one thing, over-the-Internet connectivity doesn’t typically reach into another environment, in which there are all kinds of network challenges like overlapping IP addresses, not to mention the difficulty in standardizing security policies and monitoring network activity.


Aware of what tech debt costs them, CIOs still can’t make it an IT priority

The trick for CIOs who have significant tech debt is to sell it to organization leadership, he says. One way to frame the need to address tech debt is to tie it to IT modernization. “You can’t modernize without addressing tech debt,” Saroff says. “Talk about digital transformation.” ... “You don’t just say, ‘We’ve got an old ERP system that is out of vendor support,’ because they’ll argue, ‘It still works; it’s worked fine for years,’” he says. “Instead, you have to say, ‘We need a new ERP system because you have this new customer intimacy program, and we’ll either have to spend millions of dollars doing weird integrations between multiple databases, or we could upgrade the ERP.’” ... “A lot of it gets into even modernization as you’re building new applications and new software,” he says. “Oftentimes, if you’re interfacing with older platforms that have sources of data that aren’t modernized, it can make those projects delayed or more complicated.” As organizational leaders push CIOs to launch AI projects, an overlooked area of tech debt is data management, adds Ricardo Madan, senior vice president for global technology services at IT consulting firm TEKsystems.


Is efficiency on your cloud architect’s radar?

Remember that we can certainly measure the efficiency of each of the architecture’s components, but that only tells you half of the story. A system may have anywhere from 10 to 1,000 components. Together, they create a converged architecture, which provides several advantages in measuring and ensuring efficiency. Converged architectures facilitate centralized management by combining computing, storage, and networking resources. ... With an integrated approach, converged architectures can dynamically distribute resources based on real-time demand. This reduces idle resources and enhances utilization, leading to better efficiency. Automation tools embedded within converged architectures help automate routine tasks such as scaling, provisioning, and load balancing. These tools can adjust resource allocation in real time, ensuring optimal performance without manual intervention. Advanced monitoring tools and analytics platforms built into converged architectures provide detailed insights into resource usage, cost patterns, and performance metrics. This enables continuous optimization and proactive management of cloud resources.
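The real-time reallocation described above reduces, at its simplest, to a scaling rule driven by current utilization. A toy version, with arbitrary example thresholds:

```python
# Toy sketch of automated resource reallocation: decide a pool's new
# instance count from its average utilization. Thresholds are examples.
def scale_decision(instances, utilization, high=0.80, low=0.30):
    """Return the new instance count for a pool given average utilization."""
    if utilization > high:
        return instances + 1          # add capacity before saturation
    if utilization < low and instances > 1:
        return instances - 1          # reclaim idle capacity
    return instances                  # within the healthy band: no change

print(scale_decision(4, 0.92))  # overloaded: grow to 5
print(scale_decision(4, 0.12))  # mostly idle: shrink to 3
print(scale_decision(4, 0.55))  # healthy: stay at 4
```

The dead band between the two thresholds is the important design choice: without it, a pool hovering near a single threshold would oscillate between scaling up and down, wasting exactly the resources the automation is meant to save.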


ITSM concerns when integrating new AI services

The key to establishing stringent access controls lies in feeding each LLM only the information that its users should consume. This approach eliminates the concept of a generalist LLM fed with all the company’s information, thereby ensuring that access to data is properly restricted and aligned with user roles and responsibilities. ... To maintain strict control over sensitive data while leveraging the benefits of AI, organizations should adopt a hybrid approach that combines AI-as-a-Service (AIaaS) with self-hosted models. For tasks involving confidential information, such as financial analysis and risk assessment, deploying self-hosted AI models ensures data security and control. Meanwhile, utilizing AIaaS providers like AWS for less sensitive tasks, such as predictive maintenance and routine IT support, allows organizations to benefit from the scalability and advanced features offered by cloud-based AI services. This hybrid strategy ensures that sensitive data remains secure within the organization’s infrastructure while taking advantage of the innovation and efficiency provided by AIaaS for other operations.
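
The hybrid split described above amounts to a routing decision made per task. A minimal sketch, where the endpoint URLs and task labels are made-up assumptions for illustration:

```python
# Hypothetical sketch of hybrid AI routing: confidential workloads go to a
# self-hosted model inside the organization's infrastructure, routine ones
# to an AIaaS provider. Endpoints and labels are illustrative assumptions.
SELF_HOSTED = "https://llm.internal.example.com/v1"  # assumed internal endpoint
AIAAS = "https://aiaas.example.com/v1"               # assumed cloud endpoint

# Tasks whose data must never leave the organization's infrastructure.
SENSITIVE_TASKS = {"financial_analysis", "risk_assessment"}

def route_request(task: str) -> str:
    """Return the endpoint a given task should be sent to."""
    return SELF_HOSTED if task in SENSITIVE_TASKS else AIAAS
```

In practice the sensitivity classification would come from a data governance policy rather than a hard-coded set, but the control point is the same: the router, not the model, enforces where data is allowed to go.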


Fighting Back Against Multi-Staged Ransomware Attacks Crippling Businesses

Ransomware has evolved from lone wolf hackers operating from basements to complex organized crime syndicates that operate just like any other professional organization. Modern ransomware gangs employ engineers who develop the malware and platform, help desk staff who answer technical queries, analysts who identify target organizations, and, ironically, PR pros for crisis management. The ransomware ecosystem also comprises multiple groups with specific roles. For example, one group (operators) builds and maintains the malware and rents out their infrastructure and expertise (a.k.a. ransomware-as-a-service). Initial access brokers specialize in breaking into organizations and selling the acquired access, data, and credentials. Ransomware affiliates execute the attack, compromise the victim, manage negotiations, and share a portion of their profits with the operators. Even state-sponsored attackers have joined the ransomware game due to its potential to cause wide-scale disruption and because it is very lucrative.


Optimizing Software Quality: Unit Testing and Automation

Any long-term project without proper test coverage is destined to be rewritten from scratch sooner or later. Unit testing is a must-have for the majority of projects, yet there are cases when one might omit this step. For example, you are creating a project for demonstration purposes. The timeline is very tight. Your system is a combination of hardware and software, and at the beginning of the project, it's not entirely clear what the final product will look like. ... in automation testing the test cases are executed automatically. It happens much faster than manual testing and can be carried out overnight, as the whole process requires minimal human interference. This approach is an absolute game changer when you need to get quick feedback. However, as with any automation, it may need substantial time and financial resources during the initial setup stage. Even so, it is well worth adopting, as it will make the whole process more efficient and the code more reliable. The first step here is to understand if the project incorporates test automation. You need to ensure that the project has a robust test automation framework in place.
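
A concrete example of the kind of check an automation framework executes unattended on every commit or overnight run: a small function plus a unit test for it. The function and values are illustrative:

```python
# A minimal unit test of the sort a framework like pytest discovers and runs
# automatically. The function under test is an illustrative example.
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Assertions document expected behavior and fail loudly on regressions.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()  # a test runner would normally invoke this, not the module itself
```

Once such tests exist, wiring them into a scheduled or per-commit pipeline is what turns unit testing into the fast, hands-off feedback loop the excerpt describes.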


In the age of gen AI upskilling, learn and let learn

Gen AI upskilling is neither a one-off endeavor nor a quick fix. The technology’s sophistication and ongoing evolution requires dedicated educational pathways powered by continuous learning opportunities and financial support. So, as leaders, we need to provide resources for employees to participate in learning opportunities (that is, workshops), attend third-party courses offered by groups like LinkedIn, or receive tuition reimbursements for upskilling opportunities found independently. We must also ensure that these resources are accessible to our entire employee base, regardless of the nature of an employee’s day-to-day role. From there, you can institutionalize mechanisms for documenting and sharing learnings. This includes building and popularizing communication avenues that motivate employees to share feedback, learn together and surface potential roadblocks. Encouraging a healthy dialogue around learning, and contributing to these conversations yourself, often leads to greater innovation across your organization. At my company, we tend to blend the learning and sharing together. 


Embracing Technology: Lessons Business Leaders Can Learn from Sports Organizations

To maintain their competitive edge, sports organizations are undertaking comprehensive digital transformations. Digital technologies are integrated across all facets of operations, transforming people, processes, and technology. Data analytics guide decisions in areas such as player recruitment, game strategies, and marketing efforts.  ... The convergence of sports and technology reveals new business opportunities. Sponsorships from technology companies showcase their capabilities to targeted audiences and open up new markets. Innovations in sports technology, such as advanced training equipment and analytical tools, are driving unprecedented possibilities. By embracing these insights, business leaders can unlock new avenues for growth and innovation in their own industries. Partnering with technology firms can lead to the development of new products, services, and market opportunities, ensuring sustained success and relevance in an ever-evolving business landscape.


Containerization Can Render Apps More Agile Painlessly

Application development and deployment methods will change because the app developer no longer has to think about the integration of an app with an underlying operating system and associated infrastructure. This is because the container already has the correct configuration of all these elements. If an app developer wants to immediately deploy an app in both Linux and Windows environments, they can. ... Most IT staff have found that they need specialized tools for container management, and that they can’t use the tools that they are accustomed to. Projects and companies like Kubernetes, Dynatrace, and Docker all provide container management tools, but mastering these tools requires IT staff to be trained on them. Security and governance also present challenges in the container environment because each container packages its own base OS image while sharing the host’s kernel. If an OS security vulnerability is discovered, the base images across all containers must be patched and rebuilt in sync to resolve the vulnerability. In cases like this, it’s ideal to have a means of automating the fix process, but it might be necessary to do it manually at first.


Can AI even be open source? It's complicated

Clearly, we need to devise an open-source definition that fits AI programs to stop these faux-source efforts in their tracks. Unfortunately, that's easier said than done. While people constantly fuss over the finer details of what's open-source code and what isn't, the Open Source Initiative (OSI) has nailed down the definition, the Open Source Definition (OSD), for almost twenty years. The convergence of open source and AI is much more complicated. In fact, Joseph Jacks, founder of the venture capital (VC) firm OSS Capital, argued there is "no such thing as open-source AI" since "open source was invented explicitly for software source code." It's true. In addition, open-source's legal foundation is copyright law. As Jacks observed, "Neural Net Weights (NNWs) [which are essential in AI] are not software source code -- they are unreadable by humans, nor are they debuggable." As Stefano Maffulli, OSI executive director, has told me, software and data are mixed in AI, and existing open-source licenses are breaking down. Specifically, trouble emerges when all that data and code are merged in AI/ML artifacts -- such as datasets, models, and weights.



Quote for the day:

"Leadership does not depend on being right." -- Ivan Illich

Daily Tech Digest - December 21, 2023

The New HR Playbook: Catalyze Innovation With Analytics And AI

Metaverse and blockchain technologies — underpinned by data and AI — also offer a lot of possibilities for improving HR practices. The metaverse, a shared virtual space bridging physical and digital realities, offers avenues for remote workspaces and virtual collaboration. It can enhance recruitment, onboarding, training, and development processes by providing immersive and interactive experiences that engage candidates and employees on a new level. The metaverse could also help companies with decentralized teams cultivate a strong organizational culture by giving employees a shared virtual space for interaction and engagement. Blockchain technology offers transparency and security that can have profound implications for HR processes. HR departments can use blockchain to improve the security of record-keeping, verify employee credentials, and simplify benefits administration. Blockchain can also streamline payroll processes, especially for international employees. Companies can even use blockchain to create decentralized, employee-driven platforms for collaboration and communication.


Why 2024 will be the year of the CISO

As the ESG/ISSA research indicates, many fed-up CISOs will retire, while others will move on to become virtual CISOs (vCISOs) or take field CISO positions with security technology vendors. We'll read numerous stories next year about CISOs up and quitting on the spur of the moment. While the reasons won't be disclosed, you can bet they are among those cited above. Competition for qualified candidates will be fierce. On a side note, I don't believe there is a significant population of next-generation CISO candidates with the right experience to step up. In 2024, we will augment our general discussion of the global cybersecurity skills shortage with a specific addendum about the CISO shortage. CISO pay and compensation will rise precipitously. Aside from a handful of $1 million positions, CISOs aren't paid nearly as much as one might assume. Salary.com calculates a median salary of about $241,000 with 90% of CISOs making $302,000 or less. Given the job requirements (long hours, stress, being on-call, etc.), this isn't very much. With the competition for candidates, firms will greatly increase base pay, perks, and bonuses, leading to hyper CISO salary inflation.


Hot Jobs in AI/Data Science for 2024

“The new and highly specialized role known as the ‘LLM Engineer’ is primarily found within organizations that have reached an advanced stage in their AI journey, having conducted numerous experiments but now facing challenges in the operationalization of their AI models at scale,” says Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab. ... “Some of the most sought-after AI positions today include machine learning engineer, AI engineer, and AI architect,” says Shmuel Fink, chair of the Master of Science in Data Analytics program at the Touro University Graduate School of Technology. “Nevertheless, several other AI roles are also gaining prominence, such as AI ethicist, AI product manager, AI researcher, computer vision engineer, robotics engineer, and AI safety engineer. Moreover, there are positions that require industry-specific expertise, like a healthcare AI engineer.” Meanwhile, employees in any job role will become more valuable if they possess AI skills. As they gain those skills, some specialized job roles will evolve while others disappear.


How Blockchain Will Change Organizations

The fact that blockchain is a distributed database means it is very difficult to delete data. Once something has been recorded on the blockchain, it becomes part of the permanent record. This traceability of data is another key advantage of blockchain technology. The data stored on a blockchain is immutable, meaning that it cannot be changed or deleted. This traceability can be useful for tracking the provenance of goods and tracing the origins of data. It also has implications for compliance, as organizations will be able to show exactly what data they have and where it came from. ... Under the traditional centralized model, organizations have complete control over the data they store. However, individuals have full control over their data with blockchain technology. This is because each user has a private key, which is used to access their data. Individuals have complete control over their data, which is a key advantage of blockchain technology. It means that users can be sure that their data is safe and secure and that they can share it with whomever they choose.
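
The immutability described above comes from hash chaining: each block commits to the hash of its predecessor, so altering any earlier record invalidates every later link. A toy sketch (the record format is an illustrative assumption):

```python
# Why on-chain data is tamper-evident: each block stores the hash of the
# previous block, so changing any record breaks the chain from that point on.
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    """Link each record to its predecessor via hashes."""
    chain, h = [], "0" * 64          # genesis placeholder hash
    for r in records:
        h = block_hash(h, r)
        chain.append((r, h))
    return chain

def verify(chain):
    """Recompute every link; any tampered record makes this return False."""
    h = "0" * 64
    for data, stored in chain:
        h = block_hash(h, data)
        if h != stored:
            return False
    return True
```

Real blockchains add consensus, signatures, and Merkle trees on top, but this linking is the mechanism that makes retroactive edits detectable.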


Industry Impact: Celebrating IT's Milestones and Achievements This Year

The integration of AI into various solutions, including observability, IT service management, and database solutions, has allowed for greater automation of the mundane tasks that often bog down IT pros and hinder organizations from accelerating their digital transformations. AI-powered capabilities free up valuable time for IT pros, allowing them to focus on the most important tasks at hand. Autonomous operations, enabled by purpose-built models for IT operations and large language models, are poised to revolutionize IT environments in the coming years, reducing operation costs and bettering the lives of those in the tech workforce. ... The IT industry has a smorgasbord of accomplishments that have enriched the digital lives of organizations this year. The industry’s cloud migration journey, in particular, has played a central role in allowing organizations to scale their operations and pivot rapidly in response to market conditions. The cloud journey has transformed the way businesses operate, offering scalability, flexibility, and cost-efficiency. 


An IT Carol: How the Ghosts of IT Past and Present Can Help Improve the Future

You see yourself sitting at your desk, frantically trying to juggle more service desk tickets than you ever thought were possible. The trip to the future also shows the vast number of new complex systems that teams are using. As applications, networks, databases, and infrastructures grew in complexity, so did the tools and solutions we need to manage them. This has created a future where IT pros are trying to navigate and manage some of the most complex systems and environments imaginable. Teams are more overworked than ever before. You spend so much time fighting fires that you have no time to build better technology that provides important new capabilities. You have almost no time to think about anything else, let alone spend the holiday with family or friends. Thankfully, this is not a future that has to be, but rather one we can avoid if we take the right steps today. Right now, we are on the path to improving the lives of IT teams through the integration of artificial intelligence (AI). IT solutions powered by AI, such as observability and ITSM, can help manage the complex IT environments we are witnessing through ongoing digital transformation and the move to the cloud.


Why data, AI, and regulations top the threat list for 2024

Some of the essential questions security teams ought to be asking themselves include: How do we manage and safeguard aspects like confidentiality, integrity, and availability of data? What strategies can we employ to protect our data against cyber threats and misuse? How do we address the security challenges that emerge with expanding data repositories? How do we differentiate between valuable data and redundant information? Furthermore, there’s often a misalignment in how data is structured versus the business framework. Consequently, security teams may need to engage in discussions with business units to clarify issues such as how we are applying our data. With whom is this data being shared? ... Although AI technologies aren’t new, the recent widespread adoption of AI has introduced a myriad of business and security challenges for organizations. Key questions to consider include: How do we monitor AI usage within the organization? How do we regulate the data shared with AI systems by employees? How do we ensure ongoing compliance with ethical standards and legal requirements?


2023 - The year of transformation and harmonisation

Millennial leaders bring a distinctively dynamic, digitalised approach to their roles, characterised by agility, openness, proactiveness, and hands-on engagement. Their adeptness in navigating the digital landscape seamlessly allows them to forge strong connections within their predominantly Gen Z and millennial workforce. This workforce, in turn, embodies an informed, forward-looking, and tech-savvy ethos, driven by cutting-edge technologies that facilitate smart and efficient work practices. In the world of leading-edge technologies, the arrival of OpenAI's ChatGPT in the preceding November continued to take centre stage. Throughout the year, there was a surge in competition and discussions surrounding AI, particularly generative AI, which gained momentum. Amidst these discussions, Google's introduction of Bard added fervour to the debate, igniting intense conversations about the potential impact of generative AI on employment and the perceived threat to various job roles. This stirred a pot of mixed emotions—feelings of anxiety, uncertainty, and ambiguity swirled within the tech sphere.


Small businesses lead the way, while larger industries lag in tech adoption

On the other hand, many leaders in the small and mid-sized industrial sector are in the age group of 50 and above. When they initially embarked on their careers in the core industry, the adoption of IT and technology in their companies was significantly lower. Technology was not as pervasive, and IT integration was often considered an unnecessary expense. For those who did attempt computerisation in the early 2000s, the experience was often disheartening. Small IT companies that provided software solutions during that period often faced challenges and many even disappeared. The owners of these companies, faced with the uncertainty and challenges of running a technology-based business, opted for well-paying jobs instead. This experience left a lasting impact on their perception of technology and its role in business operations. Moreover, the proliferation of the internet and the rise of startups introduced a new paradigm. Many services and software were offered for free or at significantly reduced rates, fostering an expectation of inexpensive or cost-free technology solutions. This demotivated many software company owners from continuing in the business. 


What’s Ahead for AI In 2024: The Transformative Journey Continues

The coming year will see a shift in how generative AI is employed by businesses, with a greater emphasis on using organizational data. Companies are increasingly cautious about sharing sensitive data on public platforms, opting instead to host private foundation models within their four walls. This move is driven by concerns over data security and the desire to customize AI applications to specific organizational needs. By using their own data, companies can ensure that AI output is relevant and in context. This trend will lead to innovative applications of generative AI in a variety of business functions. ... New tuning techniques such as prompt tuning and retrieval augmented generation (RAG) will gain popularity next year. These methods provide more context-specific adjustments to AI models without the need for extensive retraining. Prompt tuning, for example, uses smaller pre-trained models to encode text prompts; RAG combines specific information with prompts to enhance the relevance of the model's output.
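
As a toy illustration of the RAG idea (retrieve a relevant snippet from private data and prepend it to the prompt before calling the model), with naive word-overlap scoring standing in for a real retriever:

```python
# Toy retrieval-augmented generation (RAG) sketch: pick the most relevant
# document from a private corpus and inject it into the prompt as context.
# Word-overlap scoring is purely illustrative; real systems use embeddings.
import re

def tokens(text: str) -> set:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list) -> str:
    """Return the corpus document sharing the most words with the query."""
    q = tokens(query)
    return max(corpus, key=lambda doc: len(q & tokens(doc)))

def augment_prompt(query: str, corpus: list) -> str:
    """Prepend retrieved context so the model answers from company data."""
    return f"Context: {retrieve(query, corpus)}\n\nQuestion: {query}"
```

The augmented prompt, not the raw question, is what gets sent to the (self-hosted or cloud) model; this is how RAG grounds output in organizational data without retraining.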



Quote for the day:

"People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - December 19, 2023

7 Security Trends to Watch Heading into 2024

Cyberattacks led by nation-state threat actors, as well as politically motivated hacktivist groups, will continue in relation to the active conflicts in Ukraine and Gaza. Vanderlee points out that attacks in these regions may have a higher likelihood of kinetic impact. For example, Sandworm, a threat actor linked with Russia, disrupted the power in Ukraine and caused a power outage in late 2022. “Those are definitely things to watch out for, particularly if you do business in those regions or in countries situated around those regions,” says Vanderlee. ... Cloud migration continues to be a significant theme in the IT space. As more organizations embrace a cloud-first approach, threat actors are looking for ways to target hybrid and multi-cloud environments. Mandiant observed threat actors targeting cloud environments and seeking ways to gain persistence and move laterally in 2023, according to Google Cloud’s Cybersecurity Forecast 2024. That trend is likely to bleed over into 2024; threat actors are going to look for ways to exploit cloud misconfigurations and move laterally across multi-cloud environments.


Internet's deep-level architects slam US, UK, Europe for pushing device-side scanning

Client-side scanning has since reappeared, this time on legislative agendas. And the IAB – a research committee for the Internet Engineering Task Force (IETF), a crucial group of techies who help keep the 'net glued together – thinks that's a bad idea. "A secure, resilient, and interoperable internet benefits the public interest and supports human rights to privacy and freedom of opinion and expression," the IAB declared in a statement just before the weekend. "This is endangered by technologies, such as recent proposals for client-side scanning, that mandate unrestricted access to private content and therefore undermine end-to-end encryption and bear the risk to become a widespread facilitator of surveillance and censorship." ... For the IAB and IETF, client-side scanning initiatives echo other problematic technology proposals – including wiretaps, cryptographic backdoors, and pervasive monitoring. "The IAB opposes technologies that foster surveillance as they weaken the user's expectations of private communication which decreases the trust in the internet as the core communication platform of today's society," the organization wrote.


Zombie Scrum First Aid

Zombie Scrum is on the rise! What may look like Scrum from a distance often turns out to be anything but Scrum when you take a closer look. Although teams go through the motions of Scrum, Sprints don’t result in valuable outcomes, customers are not involved, teams have little autonomy, and nobody is doing anything to improve. The first response to Zombie Scrum might be to panic, run around, and hide below your desk. That doesn’t usually work. So, for our book, the Zombie Scrum Survival Guide, we created a simple poster that tells you exactly what to do in clear and simple language. ... Complaints, cynicism, and sarcasm don’t help anyone. It may even contribute to teams sliding further into Zombie Scrum. Instead, highlight what works well, where improvements occur, and what is possible when you work together. Use humor to lighten the mood, but not to sugarcoat the truth. Facilitate the next Sprint Retrospective with the Liberating Structure ‘Appreciative Interviews’. It helps identify enablers for success in less than one hour. By starting from what goes well — instead of what doesn’t — AI liberates spontaneous momentum and insights for positive change as “hidden” success stories are uncovered.


On-prem vs cloud storage: Four decisions about data location

For the best performance, system architects need to minimise latency between applications and storage. To access cloud storage via the public internet inevitably increases latency. Internet connections are also more prone to variable performance and general reliability issues. This suggests that for best performance, data should be stored on-premise. For the most critical applications, this is still usually the case. But the decision is not always clear cut. “We know that if you start to run compute on a storage bucket across the wire, you are going to have a performance impact,” cautions Paul Mackay, regional vice-president for EMEA and APAC at cloud data firm Cloudera. ... Even so, optimised on-premise storage can still be the cheaper option. As PA’s Gupta points out, much depends on how new the customer’s on-site infrastructure is, and how much life it has left. Cloud storage also has hidden costs. Data egress is frequently cited as a reason for higher than expected bills, but firms can also find they pay more than expected because they store data for extended periods in expensive tiers rather than dedicated cloud archives. Again, careful application design and a clear picture of data use will minimise this.
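
The "hidden cost" point about egress is easy to make concrete with simple arithmetic. The rates below are made-up assumptions, not any provider's published pricing:

```python
# Illustrative monthly cloud storage bill showing how data egress can
# dominate the cost picture. All rates are made-up assumptions.
def monthly_cloud_cost(stored_gb: float, egress_gb: float,
                       storage_rate: float = 0.023,  # assumed $/GB-month
                       egress_rate: float = 0.09) -> float:  # assumed $/GB out
    """Storage plus egress for one month, in dollars."""
    return round(stored_gb * storage_rate + egress_gb * egress_rate, 2)
```

Under these assumed rates, storing 1 TB costs about $23/month, but pulling half of it back out adds roughly $45 more, which is why workloads that read their data heavily can undercut the headline storage price by a wide margin.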


Parallels Between Open Source and Fully Remote Team Setups

In open source and remote work, digital communication is the vital link uniting individuals, fostering collaboration and understanding. Beyond information transfer, it builds relationships and transcends cultural differences. Contributors in open source projects require effective digital communication for diverse backgrounds. Platforms like GitHub offer not just code repositories but crucial discussion spaces. Remote work tools like Slack and Zoom create a virtual office, addressing the challenge of sustaining connections. Clarity counters miscommunication, while video meetings provide a personal touch, supporting empathetic communication. Inclusive digital communication ensures accessibility, involving all contributors. ... Open source communities epitomize meritocracies, fostering diversity and innovation by evaluating contributions solely on merit. In remote work, meritocracy shifts the emphasis from productivity to quality and impact, allowing introverted individuals to shine based on tangible outputs, fostering an objective assessment. While offering advantages, challenges include potential “echo chamber” effects and the risk of overlooking diverse contributions. 


The impact of prompt injection in LLM agents

Addressing prompt injection in LLMs presents a distinct set of challenges compared to traditional vulnerabilities like SQL injections. In these types of scenarios, the structured nature of the language allows for parsing and interpretation into a syntax tree, making it possible to differentiate between the core query (code) and user-provided data, and enabling solutions like parameterized queries to handle user input safely. In contrast, LLMs operate on natural language, where everything is essentially user input with no parsing into syntax trees or clear separation of instructions from data. This absence of a structured format makes LLMs inherently susceptible to injection, as they cannot easily discern between legitimate prompts and malicious inputs. Any defensive and mitigation strategies should be created with the assumption that attackers will eventually be able to inject prompts successfully. Firstly, enforcing stringent privilege controls ensures LLMs can access only the essentials, minimizing potential breach points. We should also incorporate human oversight for critical operations to add a layer of validation to safeguard against unintended LLM actions.
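
The contrast with SQL injection can be shown in a few lines: a parameterized query binds user input strictly as data, while string concatenation lets it become part of the query itself. This code/data separation is exactly what natural-language prompts lack:

```python
# Parameterized queries vs. string concatenation, using Python's built-in
# sqlite3 module. The table and input are illustrative.
import sqlite3

def query_unsafe(conn, name):
    # String concatenation: user input becomes part of the SQL "code".
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def query_safe(conn, name):
    # Parameterized placeholder: the driver treats the input strictly as data.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Classic injection payload: rewrites the unsafe query's WHERE clause.
malicious = "alice' OR '1'='1"
```

Run against this table, the unsafe query matches every row because the payload becomes part of the condition, while the safe query matches none (no user is literally named `alice' OR '1'='1`). An LLM has no equivalent of the `?` placeholder, which is the crux of the excerpt's argument.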


Navigating cloud concentration and AI lock-in

Although you can choose to reduce the use of a specific cloud provider, it is sometimes nearly impossible to move some applications to other platforms. This is due to the coupling of those applications to the cloud platform and the economic inability to get them off those platforms. To guard against the risks associated with cloud concentration and AI lock-in, IT leaders are exploring strategies to reduce dependency on a single cloud provider. This can include leveraging single-tenant cloud solutions, colocation companies, and hybrid cloud strategies to diversify their cloud deployment and infrastructure. As IT leaders navigate the complex landscape of cloud concentration risks and AI lock-in, it is evident that an agile approach to cloud strategy and AI adoption is mandatory. Organizations can mitigate risks by understanding the nuanced considerations of vendor selection, fostering a multicloud approach, and embracing innovative technologies. At the end of the day, keep your eyes open for the fully optimized solution, and do not focus on just a single cloud provider’s services, including AI.


Will Putting a Dollar Value on Vulnerabilities Help Prioritize Them?

Whether the focus on impact makes VISS any more valuable than other scoring systems is a matter of debate. Any scoring systems should not just replicate what others are already doing, and VISS seems to try to cover some new ground — at least in terms of scope, says Brian Martin, vulnerability historian at Flashpoint, a threat intelligence firm. "Do we need another scoring system? No, but kind of yes," he says. "On one hand, we have too many SSes. We have CVSS version 2, version 3, version 4, we have EPSS, we have the ransomware prediction scoring system — So I'm skeptical, but if it is more direct and to be utilized for a single purpose, such as bug bounties, then I can see it being beneficial." However, companies should not expect prioritizing vulnerabilities using VISS to be any easier than it is with other systems. While VISS may be simpler to calculate, it still requires knowledgeable answers to assign the right level of risk to vulnerabilities, says Tim Jarrett ... "Scoring models are not are not silver bullets," he says. "You actually have to adopt them and use them and feed them. And I think that what this does not do is make the problem of prioritizing vulnerabilities any less labor intensive."


9 tips for achieving IT service delivery excellence

To achieve maximum efficiency, Cziomer also suggests focusing service efforts on DevOps Research and Assessment (DORA) metrics, such as “lead time for change” and “time to restore service.” Customer-centric Net Promoter Scores are equally important, he adds. “To dive deeper into understanding our services, I employ methods like value stream mapping to pinpoint bottlenecks or inefficiencies,” says Cziomer, who feels that proactive approaches such as these enable IT organizations to consistently elevate their service levels. ... Effective IT service delivery begins by creating and standardizing processes and documentation, says Patrick Cannon, field CTO at data center and cloud services firm US Signal. Standardization ensures a consistent end-user experience with outcomes that adhere to established security policies. “It’s also beneficial for effective training and new IT staff onboarding,” he says, adding that when IT understands the needs of each business unit, it opens the way to a more proactive service approach, reducing downtime and fostering innovation.
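
The two DORA metrics Cziomer cites are straightforward to compute once changes and incidents are timestamped. A sketch, assuming a simple `(start, end)` tuple format for the records:

```python
# Sketch of two DORA metrics from timestamped records. The (start, end)
# tuple format is an assumption made for illustration.
from datetime import timedelta

def lead_time_for_change(changes):
    """Average commit-to-deploy delay; changes = [(committed, deployed), ...]."""
    deltas = [deployed - committed for committed, deployed in changes]
    return sum(deltas, timedelta()) / len(deltas)

def time_to_restore_service(incidents):
    """Average outage duration; incidents = [(started, restored), ...]."""
    deltas = [restored - started for started, restored in incidents]
    return sum(deltas, timedelta()) / len(deltas)
```

Trending these averages per week or per release, rather than reading them once, is what turns them into the proactive signal the excerpt describes.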


Architecting for Resilience: Strategies for Fault-Tolerant Systems

A fault-tolerant system can keep working properly even when things go wrong. Faults are any issues that make a system behave differently than expected. Faults can be caused by hardware failure, software bugs, human errors, or environmental factors like power outages. And in complex systems, with many services and sub-services and hundreds of servers distributed across different data centers, minor issues happen all the time. ... Testing plays a key role in building resilient, fault-tolerant systems. Testing helps identify and address potential weaknesses before they cause real failures or outages. There are various testing methods focused on resilience, including chaos engineering, stress testing, and load testing. These techniques simulate realistic failure scenarios like hardware crashes, traffic spikes, or database overloads. The goal is to observe how the system responds and find ways to improve fault tolerance. Testing validates whether redundancy, failover, replication, and other strategies work as intended. All big IT companies practice resilience testing. And Netflix leads the way here.
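
One of the simplest fault-tolerance building blocks implied here is retrying transient failures with exponential backoff before escalating. A minimal sketch (attempt counts and delays are illustrative):

```python
# Retry a flaky call with exponential backoff: a basic pattern for absorbing
# the transient faults the excerpt describes. Parameters are illustrative.
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying on any exception; re-raise after the last attempt."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise                       # exhausted: surface the fault
            time.sleep(base_delay * 2 ** i)  # 0.1s, 0.2s, 0.4s, ...
```

Production versions add jitter, retry only on errors known to be transient, and pair this with circuit breakers so a persistently failing dependency doesn't absorb every caller's retry budget.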



Quote for the day:

"Perhaps the ultimate test of a leader is not what you are able to do in the here and now - but instead what continues to grow long after you're gone" -- Tom Rath

Daily Tech Digest - September 05, 2023

GenAI in productivity apps: What could possibly go wrong?

The first and most obvious risk is the accuracy issue. Generative AI is designed to generate content — text, images, video, audio, computer code, and so on — based on patterns in the data it’s been trained on. Its ability to provide answers to legal, medical, and technical questions is a bonus. And in fact, often the AIs are accurate. The latest releases of some popular genAI chatbots have passed bar exams and medical licensing tests. But this can give some users a false sense of security, as when a couple of lawyers got in trouble by relying on ChatGPT to find relevant case law — only to discover that it had invented the cases it cited. That’s because generative AIs are not search engines, nor are they calculators. They don’t always give the right answer, and they don’t give the same answer every time. For generating code, for example, large language models can have extremely high error rates, said Andy Thurai, an analyst at Constellation Research. “LLMs can have rates as high as 50% of code that is useless, wrong, vulnerable, insecure, and can be exploited by hackers,” he said. 


CFOs and IT Spending: Best Practices for Cost-Cutting

Auvik Networks’ Feller stressed it is important for CFOs not to come in and start slashing everything. “There was a reason why IT applications and services were purchased in the first place and, in today’s corporate environment, many of these systems are integrated with each other and into employees’ work processes,” he says. “CIOs should have a good idea of what’s critical and sensitive.” He says the way he tends to approach this is by working with the CIO to identify the applications that are main “sources of truth” for key corporate data. These tend to be the financial and accounting systems or enterprise resource planning (ERP), customer relationship management (CRM), human resources information system (HRIS), and often a business intelligence (BI) system. “For each of those key systems, we evaluate whether they are still the right choice for where the company has evolved and will they scale as the company grows,” he says. “Replacing one or more of those systems can be a big, complicated project but is often essential to a company’s success.”


Hackers Adding More Capabilities to Open Source Malware

Researchers observed that the malware samples are currently being used by multiple threat actors, and various variants of this threat are already in the wild, with threat actors improving its efficiency and effectiveness over time. The malware is capable of stealing sensitive information from infected systems, including host information, screenshots, cached browser credentials, and files stored on the system that match a predefined list of file extensions. It also attempts to determine the presence of credential databases for browser applications including Chrome, Yandex, Edge, and Opera. Once executed, the malware creates a working directory, and a file grabber attempts to locate any files stored within the victim's Desktop folder that match a list of file extensions including .txt, .pdf, .doc, .docx, .xml, .img, .jpg, and .png. The malware then creates a compressed archive called log.zip containing all of the logs, and the data is transmitted to the attacker via Simple Mail Transfer Protocol "using credentials defined in the portion of code responsible for crafting and sending the message."


Connected cars and cybercrime: A primer

Connected car cybercrime is still in its infancy, but criminal organizations in some nations are beginning to recognize the opportunity to exploit vehicle connectivity. Surveying today’s underground message forums quickly reveals that the pieces could fall into place for more sophisticated automotive cyberattacks in the years ahead. Discussions on underground crime forums around data that could be leaked, and the software tools needed or available to enable attacks, are already intensifying. A post from a publicly searchable auto-modders forum about a vehicle’s multi-displacement system (MDS) for adjusting engine performance is symbolic of the current activity and possibilities. Another, in which a user on a criminal underground forum offers a data dump from a car manufacturer, points to the threats that are likely coming to the industry. Though offerings still seem limited to regular stolen data, compromises and network accesses are already for sale in the underground.


Identify Generative AI’s Inherent Risks to Protect Your Business

Generative AI models have basically three attack surfaces: the architecture of the model itself, the data it was trained on, and the data fed into it by end users. For example, adversarial attacks and data poisoning depend on the model’s training data having a security flaw and thus being open to manipulation and infiltration. This allows threat actors to inject incorrect or misleading information into the training data, which the model uses to generate responses, leading to inaccurate information presented as accurate by a trusted model and, subsequently, flawed decision-making. Model extraction attacks depend on the skill of the hacker to compromise the model itself. The threat actor queries the model to gain information about its structure and, therefore, determine the actions it executes and what its targets are. One goal of this sort of attack could be reverse-engineering the model’s training data, for instance, private customer data, or recreating the model itself for nefarious purposes. Notably, any of these attacks can take place before or after the model is installed at a user site. 


How attackers exploit QR codes and how to mitigate the risk

A common attack involves placing a malicious QR code in public, sometimes covering up a legitimate QR code, and when unsuspecting users scan the code they are sent to a malicious web page that could host an exploit kit, Sherman says. This can lead to further device compromise or possibly a spoofed login page to steal user credentials. "This form of phishing is the most common form of QR exploitation," Sherman says. QR code exploitation that leads to credential theft, device compromise or data theft, and malicious surveillance are the top concerns for both enterprises and consumers, he says. If QR codes lead to payment sites, users might divulge their passwords and other personal information that could fall into the wrong hands. "Many websites do drive-by download, so mere presence on the site can start malicious software download," says Rahul Telang, professor of information systems at Carnegie Mellon University’s Heinz College.


The ‘IT Business Office’: Doing IT’s admin work right

Each IT manager has a budget to manage to. Sadly, in most companies budgeting looks more like a game of pin-the-tail-on-the-donkey than a well-defined and consistent algorithm. In principle, a lot of IT staffing can be derived from a parameter-driven model. This can be hard to reconcile with Accounting’s requirements for budget development. With an IT Business Office to manage the relationship with Accounting, IT can explain its methods once, instead of manager by manager by manager. ... Business-wide, new-employee onboarding should be coordinated by HR, but more often each piece of the onboarding puzzle is left to the department responsible for that piece. An IT Business Office can’t and shouldn’t try to fix this often-broken process throughout the enterprise. But onboarding new IT employees is, if anything, even more complicated than onboarding anyone else’s employees. An IT Business Office can, if nothing else, smooth things out for newly hired IT professionals so they can start to work the day they show up for work.


MSSQL Databases Under Fire From FreeWorld Ransomware

According to an investigation by Securonix, the typical attack sequence observed for this campaign begins with brute forcing access into the exposed MSSQL databases. After initial infiltration, the attackers expand their foothold within the target system and use MSSQL as a beachhead to launch several different payloads, including remote-access Trojans (RATs) and a new Mimic ransomware variant called "FreeWorld," named for the inclusion of the word "FreeWorld" in the binary file names, a ransom instruction file named FreeWorld-Contact.txt, and the ransomware extension, which is ".FreeWorldEncryption." The attackers also establish a remote SMB share to mount a directory housing their tools, which include a Cobalt Strike command-and-control agent (srv.exe) and AnyDesk; and, they deploy a network port scanner and Mimikatz, for credential dumping and to move laterally within the network. And finally, the threat actors also carried out configuration changes, from user creation and modification to registry changes, to impair defenses.


Managing Data as a Product: What, Why, How

Applying product management principles to data includes attempting to address the needs of as many different potential consumers as possible. This requires developing an understanding of the consumer base. The consumers are typically in-house staff accessing the organization’s data. (The data is not being “sold,” but is being treated as a product available for distribution, by identifying the consumers’/in-house staff’s needs.) From a big-picture perspective, the business’s goal is to maximize the use of its in-house data. Managing data as a product requires applying the appropriate product management principles. ... The data-as-a-product philosophy is an important feature of the data mesh model. Data mesh is a decentralized form of data architecture. It is controlled by different departments or offices – marketing, sales, customer service – rather than a single location. Historically, a data engineering team would perform the research and analytics, a process that severely limited research when compared to the self-service approach promoted by the data-as-a-product philosophy and the data mesh model.


Enterprise Architecture Must Look Beyond Venturing the Gap Between Business and IT

The architects should not be the ones managing and maintaining the repository by themselves. They should facilitate the rest of the organization to make sure that everyone can access the repository. Architecture needs to become part of every strategic and tactical role in your organization. I think EA is basically following the path that so many other industries and disciplines have followed already. It’s the path of democratization. Today, we all have our supercomputer in our pocket, meaning that we have more functionality than ever before. And we don’t even have to go to the machine room, we don’t even have to go to our desk anymore; we can just take it out of our pocket and let it help us make the right decisions about where we want to go, how we’re going to send an email, which decision we’re making. This self-service way of working has really enabled organizations to be much more efficient, much more transparent, much more effective. And I think this is what we want to achieve with EA, as well.



Quote for the day:

“Just because you’re a beginner doesn’t mean you can’t have strength.” -- Claudio Toyama