Daily Tech Digest - December 25, 2023

Technical Debt is Killing Your Business: How a PLM Strategy Helps

Many organizations implicitly tolerate technical debt as a necessary investment to adapt to changing circumstances or swiftly seize new opportunities. Successful businesses stress the importance of managing technical debt through acceptance, measurement and proactive strategies, including the adoption of open standards, abstraction and incremental changes. ... Defining and adopting an effective PLM strategy is instrumental in managing technical debt comprehensively. A 2020 McKinsey study titled “Tech Debt: Reclaiming Tech Equity” highlighted the importance of strategic alignment, stating that “a degree of technical debt is an unavoidable cost of doing business, and it needs to be managed appropriately to ensure an organization’s long-term viability.” Furthermore, the study emphasized that “the goal is not to reach zero technical debt. That would involve devoting all resources to remediation rather than building points of competitive differentiation. It would also make it difficult to expedite IT development when strategic or risk considerations require it. Rather, companies should work to size, value and control their technical debt and regularly communicate it to the business.”


Improving the case for waste from data centers

The challenge originally stems from the practical complexities of collecting and harnessing residual heat from data centers. Planning authorities actively encourage heat reclamation, but the lack of existing infrastructure poses a significant obstacle. While planning conditions that mandate developers to allow for connections to ‘future’ heating networks are a positive move, this becomes futile where there is no corresponding plan for heat network development. Developers comply with the condition out of an obligation to meet regulatory requirements rather than in genuine expectation of the infrastructure ever being used. From the perspective of data center operators, investing in the infrastructure only makes sense when it generates Operational Expenditure (OpEx) savings through reduced power and water consumption. However, the misalignment in load profiles complicates this matter. As the heating network’s demands peak in winter whilst reducing in summer, the data center operates the opposite way, as it can take advantage of ‘free cooling’ during the colder months. This misalignment in load profiles also impacts energy service companies (ESCos).


The rise of observability and why it matters to your business

Automation is a two-edged sword. It’s one of those alluring concepts, but there’s real caution around trusting machines to judge what actions should and shouldn’t be taken and when. So given the sensitive nature of change management, we would expect this trend to continue to lean toward AI-led automation, but it will take some time before humans are mostly out of the loop. Moreover, while many vendors claim to have AI, there’s a wide spectrum of capabilities, and customers should be very cautious about vendor claims in this regard. Now, not surprisingly, the regulated industries of financial services, healthcare and government see a much lower tendency to be mostly AI-led in this context over the next year (well under 5% say mostly AI-led in this chart), whereas industries such as energy and high tech are much more likely to adopt AI aggressively in this space. Interestingly, the data show that senior management is more likely to push for AI adoption, whereas the practitioners, who literally have their jobs on the line – that is, machines replacing humans or getting fired for implementing rogue automation – are much less optimistic.


Innovate to elevate: Blueprint for business excellence in 2024 and beyond

The upcoming year promises an exciting development in the form of GenAI, which will be integrated into everyday applications such as search engines, office software, design tools, and communication platforms. This integration will reveal its full potential as a super-smart hyper-automation engine. By taking over routine tasks, including information retrieval, scheduling, compliance management, and project organization, it will let individuals boost their productivity and efficiency. According to one report, hyper-automation, combined with other technologies, can automate work activities that currently occupy 60-70% of employees’ time by 2024. This development offers immense value to sectors such as software engineering, R&D, customer operations, marketing, and sales, making it an indispensable part of the IT industry. In this rapidly evolving world, organizations are constantly searching for ways to enhance customer service and drive growth. One of the most promising ways to achieve this is by embracing hyper-automation technologies such as AI-powered tools, Natural Language Processing (NLP), chatbots, and virtual assistants.


4 ways robotics, AI will transform industry in 2024

The future of manufacturing is intricately linked to IT/OT integration as data will underpin innovation and efficiency. Research shows that the manufacturing industry has been at the forefront of adopting cloud-based software services and we are already seeing some customers use these to enhance quality, cost efficiency, and predictability. That makes me confident that 2024 will see the growth of data-driven logistics and manufacturing systems. Many still have an outdated view of the cloud as merely being a data collector and backup function, as we know it from our private lives. But the real potential and power don’t lie in storing data or even in linking machines. The real transformative leap comes when cloud-based software services connect humans and machines and help manufacturers simplify complex processes and make smarter decisions. The benefits of this digital evolution are significant. Remote access to manufacturing data enables quick responses to issues and continuous automation improvement. With dynamic systems now essential, trusted cloud technologies offer the latest in security and state-of-the-art services.


Proper Data Management Drives Business Success

Organizations across industries are excited about generative artificial intelligence (AI) and large language models (LLMs), and for good reason. Tools like ChatGPT-4 have the potential to transform business and revolutionize how employees do their jobs, so it’s no surprise that many people are enthusiastic about implementing them within their organizations. However, LLMs are only as good as the data on which they are trained. If an organization’s data isn’t properly sorted, tagged, and secured, the addition of LLMs will not be nearly as transformative as business leaders hope. Nearly half (45%) of IT leaders admitted that ineffective and inefficient Data Management means they can’t leverage emerging technology such as generative AI, which can put them at a competitive disadvantage. IT leaders must holistically assess the state of their data practices before implementing generative AI. Only 13% of respondents reported that Data Management initiatives are their number one priority, so it’s unsurprising that 77% of the average U.S. company’s data is redundant, obsolete, or trivial (ROT) or dark data.
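To make the "redundant" part of ROT data concrete, here is a minimal sketch of one common first step: fingerprinting records by content hash to surface byte-identical duplicates. The record names and contents are invented for illustration; real Data Management tooling goes far beyond this.

```python
# Hedged sketch: flag redundant records by hashing their content.
# Record ids and contents below are invented examples.
import hashlib

def find_redundant(records):
    """Map content hash -> list of record ids sharing identical content."""
    seen = {}
    for rec_id, content in records:
        digest = hashlib.sha256(content.encode()).hexdigest()
        seen.setdefault(digest, []).append(rec_id)
    # Keep only hashes shared by more than one record.
    return {h: ids for h, ids in seen.items() if len(ids) > 1}

records = [
    ("doc-1", "Q3 sales report"),
    ("doc-2", "Q3 sales report"),   # byte-identical copy
    ("doc-3", "Q4 forecast"),
]
dupes = find_redundant(records)
print(dupes)   # one hash maps to ["doc-1", "doc-2"]
```

In practice, near-duplicate detection (fuzzy hashing, embeddings) catches far more redundancy than exact hashes, but the exact-match pass is cheap and a reasonable place to start.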


Understanding the NSA’s latest guidance on managing OSS and SBOMs

In an effort to provide context and prioritization to downstream product and software consumers, the guidance recommends suppliers and developers adopt Vulnerability Exploitability eXchange (VEX) documents to help consumers and customers know which components are actually impacted by a vulnerability, which have been resolved, and what should potentially be addressed via compensating controls. The NSA also recommends suppliers and vendors adopt attestation processes to demonstrate the secure development of a product throughout the building, scanning, and packaging stages of product development and distribution. This effort is being led by industry initiatives such as in-toto and the SSDF, with self-attestations used when machine-readable artifacts are not generated. This helps provide assurance not just of the components of an end product but of the security of the development process as well. To address vulnerabilities, the NSA recommends using not just CVE and NVD but also other vulnerability databases such as OSV, as well as vulnerability intelligence sources such as the CISA Known Exploited Vulnerabilities (KEV) catalog and the Exploit Prediction Scoring System (EPSS).
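The VEX idea can be sketched in a few lines: given an SBOM's component list and a set of VEX-style statements, a consumer filters out components a supplier has marked not affected or fixed, leaving only what still needs action. The data shapes, component names, and CVE ids below are invented for illustration and do not follow a real VEX schema such as OpenVEX or CycloneDX VEX.

```python
# Hypothetical sketch of VEX-style triage, not a real VEX implementation.
def triage(sbom_components, vex_statements):
    """Return (component, vulnerability, status) tuples still requiring action.

    'not_affected' and 'fixed' statuses suppress a component; anything marked
    'affected', or lacking a VEX statement entirely, stays on the worklist.
    """
    status_by_pair = {
        (s["component"], s["vulnerability"]): s["status"] for s in vex_statements
    }
    actionable = []
    for comp in sbom_components:
        for vuln in comp.get("known_vulns", []):
            # Default to 'under_investigation' when no statement exists.
            status = status_by_pair.get((comp["name"], vuln), "under_investigation")
            if status in ("affected", "under_investigation"):
                actionable.append((comp["name"], vuln, status))
    return actionable

sbom = [
    {"name": "libfoo", "known_vulns": ["CVE-2023-0001"]},
    {"name": "libbar", "known_vulns": ["CVE-2023-0002"]},
    {"name": "libbaz", "known_vulns": ["CVE-2023-0003"]},
]
vex = [
    {"component": "libfoo", "vulnerability": "CVE-2023-0001", "status": "not_affected"},
    {"component": "libbar", "vulnerability": "CVE-2023-0002", "status": "affected"},
]

print(triage(sbom, vex))
# libfoo is suppressed; libbar needs action; libbaz defaults to investigation.
```

This is the prioritization payoff the guidance describes: a raw SBOM scan flags everything, while VEX statements let consumers focus remediation on what is actually exploitable.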


5 Common Data Science Challenges and Effective Solutions

The upskilling and reskilling of existing data science experts aren’t limited to technical skills. Data science experts also need enhanced problem-solving and communication skills. With the massive amount of data now available come new challenges and problems that need to be addressed. The solutions to these problems need to be properly communicated to team members and management, who may or may not have the expertise to interpret data on their own. We’ll explore this in more detail later. To address the challenge of a smaller pool of data scientists relative to demand, you need to stand out as a potential employer and attract some of the professionals in that pool. So, offer competitive salaries and benefits. The average base pay for data scientists in the US is $146,422, according to Glassdoor, and if you can offer more, so much the better. Whether you hire data scientists or already have data professionals as employees, you need to invest in data science workshops and training. These can help ensure your team’s data science skills keep pace with current practices and standards in the data science industry.


How Observability Strengthens Your Company Culture

Observability breaks down silos and makes collaboration across different clouds, databases, and dashboards seamless. For example, an issue that the DevOps team discovers through observability might lead them to collaborate with the design team in a way they may never have before. Leaders should aim to do the same for their teams by fostering greater collaboration across the entire organization. A lack of effective collaboration and communication is the top cause of workplace failures, according to 86 percent of employees and executives. Just as observability is a step up from monitoring, collaboration is the output that evolves from transparent communication. Your head of accounting probably knows precisely where each decimal point needs to be within a spreadsheet and why it needs to be there. Can they say the same about the IT team’s technology stack or the sales team’s go-to-market plan? With a culture underpinned by collaboration, employees won’t just learn how to get along. They’ll understand why each cog in your machine functions the way it does, as well as the effect of their work on their fellow employees, the end product, and the business as a whole.


The Third-Party Threat for Financial Organisations

DORA requires financial entities to have robust contracts in place with ICT service providers. Financial organisations must also maintain a register of service providers and report on this to the competent authority every year. The key here is to manage risks. This includes managing the risk of having too many critical or important functions supported by a small number of service providers. In addition, DORA requires that financial entities only contract with providers that “comply with appropriate information security standards”. Where the ICT service provider supports critical or important functions, the financial entity must ensure the standards are “the most up-to-date and highest quality”. ... Unlike the GDPR (General Data Protection Regulation), DORA does not require that these standards be identified by a specific authority, so it’s reasonable to assume that ISO 27001 – since it sets the international benchmark for information security management – would qualify as such a standard. As Alan mentioned, certifications like ISO 22301 and Europrivacy™/® add further assurance, as do due diligence checks on suppliers’ resilience, particularly for critical suppliers.



Quote for the day:

"Innovation is taking two things that already exist and putting them together in a new way." -- Tom Freston

Daily Tech Digest - December 24, 2023

The emerging role of the chief resilience officer in BCDR

Chief resilience officer is a relatively new senior-level executive title and is still evolving. Responsibilities can include business continuity and disaster recovery (BCDR), incident response, cybersecurity, and risk management. The chief resilience officer might also be designated as the lead executive for crisis management activities. Chief resilience officers must ensure the organization can adapt and improve its operations so that future disruptive events are more effectively mitigated, resulting in minimal damage to the organization and its reputation. ... Preparing for and responding to disruptive events traditionally has been managed by a wide variety of job titles in an organization. Sometimes the role is part of the IT staff or disaster recovery team. Other times it can be part of administration, risk management, emergency management, human resources or facilities management. In medium to large organizations, the need for a central leadership role for these and related activities has become evident. ... Establishing a chief resilience officer reinforces the importance of BCDR activities across the entire organization.


Global securities body releases DeFi recommendations: Finance Redefined

Following its release, some community members worried about how it could “kill” DeFi, while others said it would not have a fatal effect. Apart from IOSCO’s move, China’s central bank also urged jurisdictions across the globe to regulate the DeFi space jointly. Meanwhile, the DeFi ecosystem flourished in the past week thanks to ongoing bullish market momentum, with most tokens trading in the green on the weekly charts. IOSCO published nine recommendations for DeFi. The organization encourages consistency when it comes to regulatory oversight across jurisdictions worldwide. The new recommendations were a companion to the digital asset and crypto recommendations released in November. Furthermore, IOSCO released a note on how the two sets of recommendations can work hand in hand depending on the level of decentralization of regulated entities. ... Apart from IOSCO, the People’s Bank of China (PBoC) also pushed for joint DeFi regulation in its latest financial stability report. The central bank allotted a section to crypto assets in its report, underscoring the need for the industry to be regulated with joint efforts from various jurisdictions.


Unleashing power of language models in India’s IT landscape: The talking network revolution

The cornerstone of this transformative paradigm is the ability of language models to comprehend, analyze, and respond to user queries with human-like understanding. India, with its vast and diverse linguistic landscape, stands to benefit immensely from language models that can comprehend and respond in multiple languages. This linguistic versatility ensures that the Talking Network caters to the linguistic diversity of the Indian corporate environment, making it an inclusive and accessible solution for businesses across the country. One of the pivotal trends catalyzed by GAI and LLMs in India is the development of proactive and predictive IT maintenance tools. AIM Research predicts that, by 2024, 40 per cent of enterprise applications will embed conversational AI as a standard feature. Traditionally, IT maintenance has been a reactive process, addressing issues only when they arise. However, the Talking Network introduces a proactive dimension by leveraging predictive analytics and machine learning capabilities embedded in these language models. By analyzing historical data and identifying patterns, the network can foresee potential glitches and address them before they escalate into major disruptions.
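The "foresee glitches before they escalate" idea rests on a familiar building block: comparing each new metric reading against its recent history and flagging large deviations. Below is a deliberately minimal sketch of that building block; the metric name, window size, and threshold are invented, and a production system like the Talking Network described above would use far richer ML models.

```python
# Illustrative anomaly-flagging sketch for proactive IT maintenance.
# Values and parameters are invented examples, not a real product's logic.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings more than `threshold` standard deviations
    from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Skip flat history (sigma == 0) to avoid division by zero.
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A latency series with one obvious spike at index 6.
latency_ms = [20, 21, 19, 22, 20, 21, 95, 20, 19]
print(flag_anomalies(latency_ms))  # -> [6]
```

Where an LLM-based system adds value is on top of a detector like this: translating the flagged index into a natural-language alert, correlating it with change history, and suggesting remediation steps.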


Why Bill Gates Says AI Will Supercharge Medical Innovations

He cites an AI-powered tool under development at the Aurum Institute in Ghana that helps health workers prescribe antibiotics without contributing to antimicrobial resistance, where pathogens learn how to get past antibiotic defenses. The tool can comb through all the available information about antimicrobial resistance and suggest the best drug plan for a patient. ... Gates also sees AI assisting in education, calling AI education tools "mindblowing," as they are tailored to individual learners, and says they will "only get better." He's excited about how the technology can be localized to students in many different countries and cultural contexts. Not everything on Gates' mind is AI-related. He's concerned about climate change, saying he's "blown away by the passion from young climate activists," and hopeful that 2024 will see more investment in innovations that will help those who are most affected by the climate crisis. And he even plunges into the debate over nuclear energy. Gates notes that high-profile disasters such as Chernobyl in the 1980s and Three Mile Island in the late 1970s have spotlighted the risks, but over the past year, he's seen a shift towards acceptance.


How AI Is Transforming Industries

A critical sector of the Indian economy, agriculture contributes 18 per cent of the GDP. And several new-age start-ups are emerging in this segment, with the likes of CropIn, DeHaat, BharatAgri, and Bijak. These start-ups are helping develop innovative solutions for various aspects of agriculture, especially precision farming, supply chain management, and market linkages. The 2019 start-up Fyllo presently has over 100 agronomy models to help farmers produce over 20 crops. It provides insights on growing crops based on climate or occurrences of diseases/pests on crops. Fyllo believes that both problems can be solved with accurate data, which led it to build a number of disease- and pest-prediction models using AI. "We use AI for multiple use cases at Fyllo. The first use case is predicting the weather. We use climate data from our devices and machine learning-based weather models to come up with a highly precise farm-level weather prediction model. Another use case is getting crop health and crop stage identification from satellite imagery. We use various machine learning models to do that.


Magnetic Knots Push Future Computing Toward 3D

“In the last decades, electronics basically developed in the paradigm of two-dimensional systems,” says Nikolai Kiselev, a staff scientist at the Peter Grünberg Institute in Jülich, Germany. “Which from a certain point of view is absolutely reasonable because technologically it’s much easier to fabricate and maintain such devices. But if we look toward the future, most probably to make our devices the most efficient, at some point, we will have to turn towards a three-dimensional architecture. And that’s where the discovery we made in our paper might become useful. ... Although hopfions move around readily, other aspects of their computing potential are still uncertain. The team used transmission electron microscopy to image the hopfion, and measuring its location more efficiently is an outstanding problem. The team says they plan to look at how these objects respond to electric current, which could help detect and track them. Plus, precise details on the exact ways hopfions might encode information are still an open question. That said, Kiselev adds, many questions like this don’t yet have answers because there has been no reason to ask them.


The Art Of Listening: Silent Communication In Leadership

Silence, first and foremost, is a medium of introspection and reflection. Leaders, constantly barraged by information and demands, may find themselves lost in a maze of noise—both external and internal. Silence offers a sanctuary, a space to step back and reflect. It allows leaders to process information, contemplate decisions, and align their actions with their core values and objectives. This introspective silence is not merely an absence of noise; it’s an active engagement with one’s thoughts, a deliberate pause to understand the bigger picture. Moreover, silence can be a powerful communication tool. It’s not just about the absence of speech; it’s about listening, understanding, and absorbing. ... Silence also plays a crucial role in conflict resolution and negotiation. In tense situations, a leader’s silence can de-escalate emotions and give everyone a moment to breathe and reassess. By not immediately responding to a provocation or a challenging statement, leaders can avoid knee-jerk reactions that might exacerbate the conflict. Instead, silence can be used to control the tempo of the conversation, allowing for thoughtful and measured responses that are more likely to lead to constructive outcomes.


2024 in laptops: it’s shaping up to be a big year for Windows

It’s the AI coprocessor inside that’s intriguing to me, particularly because Intel and Microsoft have both been dropping hints about a future version of Windows arriving soon and how “AI is going to reinvent how you do everything on Windows.” Rumors suggest that Windows 12 will include a large focus on AI and take advantage of the AI coprocessors that Intel is building into its Core Ultra chips. (Intel isn’t the only one: AMD also has its own Ryzen 7000 mobile processors that include a dedicated AI engine, and these types of neural processing unit (NPU) chips are common on Arm-powered Windows laptops.) Intel held an AI event to launch its Core Ultra chips this month, just ahead of the annual Consumer Electronics Show (CES), where we’ll see all of the new laptops that are powered by Intel’s new chips. Lenovo, MSI, Acer, and Asus are all launching laptops with these new chips inside. While Intel talked a lot about “AI everywhere,” the missing piece of the puzzle, a new AI-focused version of Windows, is still a mystery right now.


Chips To Compute With Encrypted Data Are Coming

At first glance, it might seem impossible to do meaningful computation on data that looks like gibberish. But the idea goes back decades, and was finally made possible in 2009 by Craig Gentry, then a Stanford graduate student. Gentry found a way to do both addition and multiplication without calculation-killing noise accumulating, making it possible to do any form of encrypted computation. One comparison you can use to understand fully homomorphic encryption (FHE) is that it’s analogous to a Fourier transform. For those of you who don’t remember your college signal processing, a Fourier transform is a mathematical tool that turns a signal in time, such as the oscillation of voltage in a circuit, into a signal in frequency. One of the key side effects is that any math you can do in the time domain has its equivalent in the frequency domain. So you can compute in either time or frequency and come up with the same answer. The genius of FHE is that it uses lattice cryptography, a form of quantum-computer-proof encoding, as the mathematical transformation. The problem with this approach is that the transformation leads to a big change in the type and amount of data and in the sorts of operations needed to compute. That’s where the new chips come in.
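A toy example helps build intuition for computing on ciphertexts. Textbook (unpadded) RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This is emphatically not FHE (real FHE schemes are lattice-based, support both addition and multiplication, and must manage noise), and unpadded RSA with tiny primes is insecure; the sketch is purely for intuition.

```python
# Toy demonstration of a homomorphic property, NOT secure and NOT FHE.
# Textbook RSA with the classic small demo parameters.
p, q = 61, 53              # tiny primes, demo only
n = p * q                  # modulus: 3233
e, d = 17, 2753            # public / private exponents (e*d ≡ 1 mod φ(n))

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c_product = (encrypt(a) * encrypt(b)) % n   # multiply the ciphertexts...
assert decrypt(c_product) == a * b          # ...and recover a*b = 42
```

The limits of this toy are exactly why Gentry's result mattered: with multiplication alone (or addition alone, as in schemes like Paillier) you cannot evaluate arbitrary circuits, whereas FHE supports both operations on the same ciphertexts.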


Ransomware Attackers Abuse Multiple Windows CLFS Driver Zero-Days

CLFS is a high-performance, general-purpose logging system available for user- or kernel-mode software clients. Its kernel access makes it eminently useful for hackers seeking low-level system privileges, and its performance-oriented design has left a series of security holes in its wake in recent years, which ransomware actors in particular have pounced on. ... Nothing in particular changed about the CLFS driver this year. Rather, attackers seem to have just now identified what was wrong with it this whole time: It leans too far toward performance in that inescapable, eternal balance between performance and security. "CLFS is perhaps way too 'optimized for performance,'" Larin wrote, detailing all of the various ways the driver prioritizes it over protection. "It would be better to have a reasonable file format instead of a dump of kernel structures written to a file. All the work with these kernel structures (with pointers) happens right there in the blocks read from disk. Because changes are made to the blocks and kernel structures stored there, and those changes need to be flushed to disk, the code parses the blocks over and over again every time it needs to access something."



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - December 23, 2023

How LLMs made their way into the modern data stack in 2023

Beyond helping teams generate insights and answers from their data through text inputs, LLMs are also handling traditionally manual data management and the data efforts crucial to building a robust AI product. In May, Intelligent Data Management Cloud (IDMC) provider Informatica debuted Claire GPT, a multi-LLM-based conversational AI tool that allows users to discover, interact with and manage their IDMC data assets with natural language inputs. It handles multiple jobs within the IDMC platform, including data discovery, data pipeline creation and editing, metadata exploration, data quality and relationships exploration, and data quality rule generation. Then, to help teams build AI offerings, California-based Refuel AI provides a purpose-built large language model that helps with data labeling and enrichment tasks. A paper published in October 2023 also shows that LLMs can do a good job at removing noise from datasets, which is also a crucial step in building robust AI. Other areas in data engineering where LLMs can come into play are data integration and orchestration. 


Corporate governance in 2023: a year in review

2023 has seen a continuing trend of more responsibilities for directors. Often, this responsibility comes from regulators; sometimes, it comes from investors or other stakeholders. One thing is certain, though: directors are rapidly losing any remaining wiggle room to be “rubber-stamp” individuals. Modern board roles carry serious accountability; many directors are starting to appreciate that and adhere to new standards. The trouble is that sometimes the new standards overstretch the director – so much so that we now have concerns about overboarding, exhaustion, and undue stress. How will that play out if the trend of more responsibility continues? ... The board dismissed the evidently popular CEO Sam Altman in a decision made behind closed doors with utmost secrecy. And as the world’s attention predictably turned their way, they could give no answers. Soon, Altman was rehired after around 70% of the company’s staff threatened to resign and join Microsoft (a significant OpenAI investor). The board subsequently agreed to undergo a major reshuffle for more accountability and transparent decision-making.


Quantum Computing’s Hard, Cold Reality Check

The problem isn’t just one of timescales. In May, Matthias Troyer, a technical fellow at Microsoft who leads the company’s quantum computing efforts, co-authored a paper in Communications of the ACM suggesting that the number of applications where quantum computers could provide a meaningful advantage was more limited than some might have you believe. “We found out over the last 10 years that many things that people have proposed don’t work,” he says. “And then we found some very simple reasons for that.” The main promise of quantum computing is the ability to solve problems far faster than classical computers, but exactly how much faster varies. There are two applications where quantum algorithms appear to provide an exponential speed up, says Troyer. One is factoring large numbers, which could make it possible to break the public key encryption the internet is built on. The other is simulating quantum systems, which could have applications in chemistry and materials science. Quantum algorithms have been proposed for a range of other problems including optimization, drug design, and fluid dynamics. 


Navigating the Data Landscape: The Crucial Role of Data Governance in Today’s Business Environment

Data quality management has become increasingly paramount as the volume of data rises exponentially day by day. Organizations can protect their data with policies and procedures, ensure that they follow all the rules and regulations, and hire people who understand the data being collected and what it means to the company, but if that data isn’t high quality, the organization may get the short end of the stick. Maybe you’re three weeks late for a TikTok trend, or you miss out on a whole subset of customers because of a misstep in your collection methods; either way, the lost profit and the lost chance to build on that data point in the future could prove pivotal. Ensuring that your organization has processes to monitor and improve data quality on a continuous basis will save time and money in the long run. Despite its importance, implementing effective data governance comes with challenges. Organizations often face resistance to change, cultural barriers, and the complexity of managing diverse data sources.


Choosing Between Message Queues and Event Streams

There are numerous distinctions between technologies that allow you to implement event streaming and those that you can use for message queueing. To highlight them, I will compare Apache Kafka and RabbitMQ. I’ve chosen Kafka and RabbitMQ specifically because they are popular, widely used solutions providing rich capabilities that have been extensively battle-tested in production environments. ... Message queueing and event streaming can both be used in scenarios requiring decoupled, asynchronous communication between different parts of a system. For instance, in microservices architectures, both can power low-latency messaging between various components. However, going beyond messaging, event streaming and message queueing have distinct strengths and are best suited to different use cases. ... Message queueing is a good choice for many messaging use cases. It’s also an appealing proposition if you’re early in your event-driven journey; that’s because message queueing technologies are generally easier to deploy and manage than event streaming solutions. 
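The core behavioral difference between the two models can be shown in a few lines of plain code. The classes below are invented sketches, not the RabbitMQ or Kafka APIs: a queue delivers each message once and then discards it, while an event log retains events so independent consumers can read, and re-read, from their own offsets.

```python
# Illustrative model of message queueing vs. event streaming semantics.
# These classes are simplified sketches, not real broker client code.
from collections import deque

class MessageQueue:
    """Queue semantics: each message is consumed once, then gone."""
    def __init__(self):
        self._q = deque()
    def publish(self, msg):
        self._q.append(msg)
    def consume(self):
        return self._q.popleft() if self._q else None  # destructive read

class EventLog:
    """Stream semantics: the log is retained; each consumer tracks an offset."""
    def __init__(self):
        self._log = []
        self._offsets = {}          # consumer name -> next position to read
    def publish(self, event):
        self._log.append(event)
    def consume(self, consumer):
        pos = self._offsets.get(consumer, 0)
        if pos >= len(self._log):
            return None
        self._offsets[consumer] = pos + 1
        return self._log[pos]       # event stays in the log for others

q = MessageQueue()
q.publish("order-1")
assert q.consume() == "order-1"
assert q.consume() is None          # the queue forgot the message

log = EventLog()
log.publish("order-1")
assert log.consume("billing") == "order-1"
assert log.consume("analytics") == "order-1"   # same event, independent offset
```

This retained-log property is what makes event streaming suited to replay, auditing, and fan-out to many independent consumers, while the simpler once-and-done queue model is often easier to reason about and operate.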


5G and edge computing: What they are and why you should care

Instead of relying solely on large, high-powered cell towers (as 4G does), 5G will run off both those towers and a ton of small cell sites that can be clustered together. This is how 5G achieves its population density. 5G is also supposed to be more energy efficient. As such, the communications component of IoT devices won't drain as much power, resulting in longer battery life for connected devices. There's also a ton of AI and machine learning in 5G implementations. 5G nodes and interface devices are deployed on the edge, away from central hubs. They utilize AI and machine learning to analyze communications performance, and use AI to bandwidth-shape communications, to wring as much performance out of the hardware as possible. You're familiar with the term "cloud computing." We've all used cloud services, services that run on a server someplace rather than on our desktop computers or mobile devices. The cloud, of course, isn't really a cloud. Amazon, Google, Facebook, Microsoft, and others operate massive data centers packed with thousands upon thousands of servers. Soft and fluffy, the cloud is not.


Stolen Booking.com Credentials Fuel Social Engineering Scams

Social engineering expert Sharon Conheady said this type of trickery remains extremely difficult to repel, because of the customer-first nature of hospitality. Many public-facing people in such organizations, such as receptionists, are "trained to help people - that's their job," and of course they're going to bend over backwards to try to meet apparent customers' demands, Conheady said in an interview at this month's Black Hat Europe conference in London. Help desks remain another frequent target. "I had a client lately who asked me to call the help desk and obtain BitLocker keys," she said, referring to a recent penetration test. "Every single one of the help desk agents gave us the BitLocker key." That prompted her to ask: Do these personnel even know what a BitLocker key is, and why they shouldn't share it? The client said they didn't know. While training people in customer-facing roles can help, Conheady said the only truly effective approach would be to put in place strong technical controls to outright prevent and block such attacks.


Significantly Improving Security Posture: A CMMI Case Study

“Phoenix Defense has led the way in adopting CMMI best practices for nearly two decades, and has now incorporated the Security best practices,” says Kris Puthucode, Certified CMMI High Maturity Lead Appraiser at Software Quality Center LLC. “This adoption has yielded quantifiable benefits, enhancing security posture across Mission, Personnel, Physical, Process, and Cybersecurity domains. Additionally, incorporating Virtual work best practices has standardized virtual meetings and events, boosting efficiency.” Phoenix Defense has been a CMMI Performance Solutions Organization since 2005, first achieving Maturity Level 5 in 2020. ... Before adopting the CMMI Security and Managing Security Threats and Vulnerabilities Practice Areas in the model, Phoenix Defense had a closed network with no outward-facing applications and relied on a third-party vendor to monitor threats and spam. They did not fully or quantitatively track attacks against the networks or other data flows, and they required a more robust approach to properly ensure network security.


5 common data security pitfalls — and how to avoid them

While regulations like GDPR and SOX set standards for data security, they are merely starting points and should be considered table stakes for protecting data. Compliance should not be mistaken for complete data security, as robust security involves going beyond compliance checks. The fact is that many large data breaches have occurred in organizations that were fully compliant on paper. Moving beyond compliance requires actively identifying and mitigating risks rather than just ticking boxes during audits. ... Data is one of the most valuable assets for any organization. And yet, the question, “Who owns the data?” often leads to ambiguity within organizations. Clear delineation of data ownership and responsibility is crucial for effective data governance. Each team or employee must understand their role in protecting data to create a culture of security. ... Unpatched vulnerabilities are one of the easiest targets for cyber criminals. This means that organizations face significant risks when they can’t address public vulnerabilities quickly. Despite the availability of patches, many enterprises delay deployment for various reasons, which leaves sensitive data vulnerable.


Outmaneuvering AI: Cultivating Skills That Make Algorithms Scratch Their Head

Reasoning, the intellectual ninja of skills, is all about slicing through misinformation, assumptions, and biases to get to the heart of the matter. It’s not just drawing conclusions, but thinking about how we do that. This skill is the brain’s bouncer, keeping cognitive fallacies and hasty generalizations at bay. We humans, bless our hearts, are prone to jumping on the bandwagon or seeing patterns where there are none (like seeing a face on Mars or believing in hot streaks at Vegas). These mental shortcuts, or heuristics, can lead us astray, making reasoning not just useful but essential. AI is trained on our past reasoning reflected in old works. But it can’t reason on its own — at least not yet. Consider a business deciding whether to invest in a new technology. Without proper reasoning, they might follow the hype (everyone else is doing it!) or rely on gut feelings (it just feels right!). But with reasoning, they dissect the decision, weigh the evidence, consider alternatives, and make a choice that’s not just good on paper, but good in reality.



Quote for the day:

"Whether you think you can or you think you can’t, you’re right." -- Henry Ford

Daily Tech Digest - December 22, 2023

Healthcare Organisations Embrace New Technologies to Fortify Cyber Defences

Healthcare organisations have initiated partnerships with others to develop security operations centres to monitor their traffic and identify threats. Proactive programs like threat hunting and brand monitoring have also been preferred. ... These initiatives are being taken keeping in mind the requirements from CERT-In to report cyber incidents within six hours, and new requirements under the Digital Personal Data Protection Act, 2023, which require organisations to take measures to identify sources of data, take consent and manage the use and eventual destruction of data as per the guidelines given by the government. “Investments in advanced IAM technologies are becoming paramount, encompassing robust authentication methods, privileged access controls, and continuous monitoring of user activities,” says Pramod Bhaskar, CISO, Cross Identity. These measures align closely with regulatory changes and compliance requirements, as regulations like HIPAA increasingly emphasise the importance of secure user authentication, access governance, and audit trails in safeguarding patient information.


The Window of Exposure: A Critical Component of Your Cybersecurity Strategy

The goal of any responsible security professional is to reduce the window of exposure as much as possible. There are two basic approaches to this: limiting the amount of vulnerability information available to the public and reducing the window of exposure in time by issuing patches quickly. Limiting the amount of vulnerability information available to the public might work in theory, but it is impossible to enforce in practice. There is a continuous stream of research in security vulnerabilities, and most of this research results in public announcements. Hackers write new attack exploits all the time, and the exploits quickly end up in the hands of malicious attackers. While some researchers might choose not to publish a vulnerability they discover, public dissemination of vulnerability information is the norm because it is the best way to improve security. Reducing the window of exposure in time by issuing patches quickly is the other approach. Full-disclosure proponents publish vulnerabilities far and wide to spur vendors to patch faster. 
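The "window of exposure in time" above can be made concrete as the interval between public disclosure and patch deployment. A minimal sketch, assuming hypothetical dates (the timeline below is illustrative, not from any real CVE):

```python
from datetime import date

def exposure_window(disclosed: date, patched: date) -> int:
    """Days a system stayed exploitable after public disclosure."""
    return max((patched - disclosed).days, 0)

# Hypothetical timeline, for illustration only
disclosed      = date(2023, 11, 1)   # vulnerability published
patch_released = date(2023, 11, 8)   # vendor ships a fix
patch_deployed = date(2023, 12, 4)   # fix actually rolled out

print(exposure_window(disclosed, patch_released))  # vendor-side window: 7
print(exposure_window(disclosed, patch_deployed))  # real-world window: 33
```

The gap between the two numbers is the point of the article: fast vendor patches shrink the first figure, but only fast deployment shrinks the window that attackers actually exploit.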


MLflow vulnerability enables remote machine learning model theft and poisoning

Many developers believe that services bound to localhost — a computer’s internal hostname — cannot be targeted from the internet. However, this is an incorrect assumption according to Joseph Beeton, a senior application security researcher at Contrast Security, who recently gave a talk on attacking developer environments through localhost services at the DefCamp security conference. Beeton recently found serious vulnerabilities in the Quarkus Java framework and MLflow that allow remote attackers to exploit features in the development interfaces or APIs exposed by those applications locally. The attacks would only require the computer user to visit an attacker-controlled website in their browser or a legitimate site where the attacker managed to place specifically crafted ads. Drive-by attacks have been around for many years, but they are powerful when combined with a cross-site request forgery (CSRF) vulnerability in an application. In the past, hackers used drive-by attacks through malicious ads placed on websites to hijack the DNS settings of users’ home routers.
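The drive-by pattern works because the victim's own browser, not the attacker, originates the request to the localhost service. A common mitigation is to reject state-changing requests whose Origin header is not an expected local origin. A minimal sketch of that check (not MLflow's actual code; the allowed-origin list and port are assumptions for illustration):

```python
# Hypothetical allowlist for a local dev service on port 5000
ALLOWED_ORIGINS = {"http://localhost:5000", "http://127.0.0.1:5000"}

def is_csrf_safe(headers: dict) -> bool:
    """Reject browser requests from foreign web origins.

    Browsers attach an Origin header to cross-site requests, so a
    request triggered by a malicious page carries that page's origin
    even though the request arrives on 127.0.0.1.
    """
    origin = headers.get("Origin")
    if origin is None:           # non-browser client (curl, SDK): allow
        return True
    return origin in ALLOWED_ORIGINS

print(is_csrf_safe({}))                                   # True
print(is_csrf_safe({"Origin": "http://localhost:5000"}))  # True
print(is_csrf_safe({"Origin": "https://evil.example"}))   # False
```

The design point is that "bound to localhost" restricts where packets come *from* at the network layer, but says nothing about which web page told the browser to send them; only an application-layer check like this distinguishes the two.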


Chameleon Android Trojan Offers Biometric Bypass

The variant includes several new features that make it even more dangerous to Android users than its previous incarnation, including a new ability to interrupt the biometric operations of the targeted device, the researchers said. By disabling biometric access (facial recognition or fingerprint scans, for example), attackers can force a fallback to PINs, passwords, or graphical keys, capture these through keylogging functionalities, and unlock devices using previously stolen PINs or passwords. "This functionality to effectively bypass biometric security measures is a concerning development in the landscape of mobile malware," according to Threat Fabric's analysis. ... The malware's key new ability to disable biometric security on the device is enabled by issuing the command "interrupt_biometric," which executes the "InterruptBiometric" method. The method uses Android's KeyguardManager API and AccessibilityEvent to assess the device screen and keyguard status, evaluating the state of the latter in terms of various locking mechanisms, such as pattern, PIN, or password.


The Rise of AI-Powered Applications: Large Language Models in Modern Business

AI and LLMs have fundamentally altered how people and organizations interact with technology. While they drive innovation and automation across multiple sectors simultaneously, they also change how professionals make decisions and communicate with customers. They have redefined industry-specific domains while enhancing industrial growth and innovation potential. With further development and research, it is only a matter of time before these AI-driven models can replicate the qualities of human speech and interaction. There is no certainty as to the extent of AI developments and capabilities. While the potential for innovation and development seems endless, AI’s rapid growth in business and industry proves that developers have only reached the tip of the iceberg. As AI functionalities become faster and more proficient, the healthcare, education, and financial service industries will thrive further and deliver trustworthy, reliable care and services for patients, students, and customers worldwide. Because LLMs offer operational support in data and analytics, there will be cost savings as professionals transfer their time and efforts elsewhere. 


NIST Seeks Public Comment on Guidance for Trustworthy AI

This is the first time there has been an "affirmative requirement" for companies developing foundational models that pose a serious risk to national security, economic security, public health or safety to notify the federal government when training their models, and to share the results of red team safety tests, said Lisa Sotto, partner at Hunton Andrews Kurth and chair of the company's global privacy and cybersecurity practice. This will have a "profound" impact on the development of AI models in the United States, she told Information Security Media Group. While NIST does not directly regulate AI, it helps develop frameworks, standards, research and resources that play a significant role in informing the regulation and the technology's responsible use and development. Its artificial intelligence risk management framework released earlier this year seeks to provide a comprehensive framework for managing risks associated with AI technologies. Its recent report on bias in AI algorithms seeks to help organizations develop potential mitigation strategies, and the Trustworthy and Responsible AI Resource Center, launched in March, is a central repository for information about NIST's AI activities.


Why laptops and other edge devices are being AI-enabled

You can run them in the cloud, but as well as the inevitable latency this involves, it’s also increasingly costly both in terms of network bandwidth and cloud compute costs. There’s also the governance issue of sending all that potentially sensitive and bulky data to and fro. So at the very least, doing a first-cut filter to reduce and/or sanitise the transmitted data volume is valuable in all sorts of ways. You could use the GPU or even the CPU to do this filtering, and indeed that’s what some edge devices will be doing today. Alternatively you could simply run the inferencing work on the local CPU or GPU in your laptop or desktop. That works, but it’s slower. Not only can dedicated AI hardware such as an NPU do the job much faster, it will also be much more power-efficient. GPUs and CPUs doing this sort of work tend to run very hot, as evidenced by the big heatsinks and fans on high-end GPUs. That power-efficiency is useful in a desktop machine, but is much more valuable when you’re running an ultraportable on battery, yet you still want AI-enhanced videoconferencing, speedy photo editing, or smoother gaming and AR.


Future of wireless technology: Key predictions for 2024

New IoT technology will help unify connectivity across multiple home devices, transforming home users’ experience with IoT devices. Matter, a new industry standard launched in 2023, provides reliable, secure connectivity across multiple device manufacturers. Given the weight of players involved (e.g., Apple, Amazon, Google, Samsung SmartThings), we expect the adoption of Matter-certified products will be exponential in the next three years, validating Wi-Fi’s central role in the smart connected home and buildings. Pilot projects and trials of TIP Open Wi-Fi will proliferate in developing countries and price-sensitive markets due to its cost-effectiveness and the benefits offered by an open disaggregated model. Well-established wireless local-area network (WLAN) vendors will continue working to make themselves more cost-effective in these markets through massive investment in machine learning and AI and an integrated Wi-Fi + 5G offering to enterprises. Augmented and virtual reality will gain a larger share of our daily lives at home and work.


What developers trying out Google Gemini should know about their data

Google told ZDNET that it uses the API inputs and outputs to improve product quality. "Human review is a necessary step of the model improvement process," a spokesperson said. "Through review and annotation, trained reviewers help enable quality improvements of generative machine-learning models like the ones that power Google AI Studio and the Gemini Pro via the Gemini API." To protect developers' privacy, Google said their data is de-identified and disassociated from their API key and Google account, which is needed to log in to Google AI Studio. This de-identification takes place before the reviewers can see or annotate the data. Google's Terms of Service (ToS) for its generative AI APIs further states that the data is used to "tune models" and may be retained in connection to the user's tuned models "[for] re-tuning when supported models change". The ToS states: "When you delete a tuned model, the related tuning data is also deleted." The terms also state that users should not submit sensitive, confidential, or personal data to the AI models.


14 in-demand cloud roles companies are hiring for

As cloud computing grows increasingly complex, cloud architects have become a vital role for organizations to navigate the implementation, migration, and maintenance of cloud environments. These IT pros can also help organizations avoid potential risks around cloud security, while ensuring a smooth transition to the cloud across the company. With 65% of IT decision-makers choosing cloud-based services by default when upgrading technology, cloud architects will only become more important for enterprise success. ... DevOps focuses on blending IT operations with the development process to improve IT systems and act as a go-between in maintaining the flow of communication between coding and engineering teams. It’s a role that focuses on the deployment of automated applications, maintenance of IT and cloud infrastructure ... Security architects are responsible for building, designing, and implementing security solutions in the organization to keep IT infrastructure secure. For security architects working in a cloud environment, the focus is on designing and implementing security solutions that protect the business’ cloud-based infrastructure, data, and applications.



Quote for the day:

"The meaning of life is to find your gift. The purpose of life is to give it away." -- Anonymous

Daily Tech Digest - December 21, 2023

The New HR Playbook: Catalyze Innovation With Analytics And AI

Metaverse and blockchain technologies — underpinned by data and AI — also offer a lot of possibilities for improving HR practices. The metaverse, a shared virtual space bridging physical and digital realities, offers avenues for remote workspaces and virtual collaboration. It can enhance recruitment, onboarding, training, and development processes by providing immersive and interactive experiences that engage candidates and employees on a new level. The metaverse could also help companies with decentralized teams cultivate a strong organizational culture by giving employees a shared virtual space for interaction and engagement. Blockchain technology offers transparency and security that can have profound implications for HR processes. HR departments can use blockchain to improve the security of record-keeping, verify employee credentials, and simplify benefits administration. Blockchain can also streamline payroll processes, especially for international employees. Companies can even use blockchain to create decentralized, employee-driven platforms for collaboration and communication.


Why 2024 will be the year of the CISO

As the ESG/ISSA research indicates, many fed-up CISOs will retire, while others will move on to become virtual CISOs (vCISOs) or take field CISO positions with security technology vendors. We'll read numerous stories next year about CISOs up and quitting on the spur of the moment. While the reasons won't be disclosed, you can bet they are among those cited above. Competition for qualified candidates will be fierce. On a side note, I don't believe there is a significant population of next-generation CISO candidates with the right experience to step up. In 2024, we will augment our general discussion of the global cybersecurity skills shortage with a specific addendum about the CISO shortage. CISO pay and compensation will rise precipitously. Aside from a handful of $1 million positions, CISOs aren't paid nearly as much as one might assume. Salary.com calculates a median salary of about $241,000 with 90% of CISOs making $302,000 or less. Given the job requirements (long hours, stress, being on-call, etc.), this isn't very much. With the competition for candidates, firms will greatly increase base pay, perks, and bonuses, leading to hyper CISO salary inflation.


Hot Jobs in AI/Data Science for 2024

“The new and highly specialized role known as the ‘LLM Engineer’ is primarily found within organizations that have reached an advanced stage in their AI journey, having conducted numerous experiments but now facing challenges in the operationalization of their AI models at scale,” says Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab. ... “Some of the most sought-after AI positions today include machine learning engineer, AI engineer, and AI architect,” says Shmuel Fink, chair of the Master of Science in Data Analytics program at Touro University Graduate School of Technology. “Nevertheless, several other AI roles are also gaining prominence, such as AI ethicist, AI product manager, AI researcher, computer vision engineer, robotics engineer, and AI safety engineer. Moreover, there are positions that require industry-specific expertise, like a healthcare AI engineer.” But back at the ranch, employees in any job role will become more valuable if they possess AI skills. As they gain those skills, some specialized job roles will evolve while others disappear.


How Blockchain Will Change Organizations

The fact that blockchain is a distributed database means it is very difficult to delete data. Once something has been recorded on the blockchain, it becomes part of the permanent record: the data is immutable, meaning it cannot be changed or deleted. This traceability is another key advantage of blockchain technology. It can be useful for tracking the provenance of goods and tracing the origins of data, and it has implications for compliance, as organizations will be able to show exactly what data they have and where it came from. ... Under the traditional centralized model, organizations have complete control over the data they store. With blockchain technology, by contrast, individuals control their own data: each user has a private key, which is used to access it. This gives users assurance that their data is safe and secure and that they can share it with whomever they choose.
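The immutability described here comes from each block committing to the hash of its predecessor, so an edit to any old record breaks every later link in the chain. A toy hash chain, for illustration only (not any production blockchain's format):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block's predecessor

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], GENESIS
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every link; any tampered record breaks verification."""
    prev = GENESIS
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain([{"item": "widget", "origin": "plant-A"},
                     {"item": "widget", "owner": "retailer"}])
print(verify(chain))                        # True
chain[0]["record"]["origin"] = "plant-B"    # tamper with the provenance record
print(verify(chain))                        # False
```

This is why the article can claim both traceability and compliance value: the chain itself is the audit trail, and rewriting history is detectable by anyone holding a later hash.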


Industry Impact: Celebrating IT's Milestones and Achievements This Year

The integration of AI into various solutions, including observability, IT service management, and database solutions, has allowed for greater automation of the mundane tasks that often bog down IT pros and hinder organizations from accelerating their digital transformations. AI-powered capabilities free up valuable time for IT pros, allowing them to focus on the most important tasks at hand. Autonomous operations, enabled by purpose-built models for IT operations and large language models, are poised to revolutionize IT environments in the coming years, reducing operation costs and bettering the lives of those in the tech workforce. ... The IT industry has a smorgasbord of accomplishments that have enriched the digital lives of organizations this year. The industry’s cloud migration journey, in particular, has played a central role in allowing organizations to scale their operations and pivot rapidly in response to market conditions. The cloud journey has transformed the way businesses operate, offering scalability, flexibility, and cost-efficiency. 


An IT Carol: How the Ghosts of IT Past and Present Can Help Improve the Future

You see yourself sitting at your desk, frantically trying to juggle more service desk tickets than you ever thought were possible. The trip to the future also shows the vast number of new complex systems that teams are using. As applications, networks, databases, and infrastructures grew in complexity, so did the tools and solutions we need to manage them. This has created a future where IT pros are trying to navigate and manage some of the most complex systems and environments imaginable. Teams are more overworked than ever before. You spend so much time fighting fires that you have no time to build better technology that provides important new capabilities. You have almost no time to think about anything else, let alone spend the holiday with family or friends. Thankfully, this is not a future that has to be, but rather one we can avoid if we take the right steps today. Right now, we are on the path to improving the lives of IT teams through the integration of artificial intelligence (AI). IT solutions powered by AI, such as observability and ITSM, can help manage the complex IT environments we are witnessing through ongoing digital transformation and the move to the cloud.


Why data, AI, and regulations top the threat list for 2024

Some of the essential questions security teams ought to be asking themselves include: How do we manage and safeguard aspects like confidentiality, integrity, and availability of data? What strategies can we employ to protect our data against cyber threats and misuse? How do we address the security challenges that emerge with expanding data repositories? How do we differentiate between valuable data and redundant information? Furthermore, there’s often a misalignment in how data is structured versus the business framework. Consequently, security teams may need to engage in discussions with business units to clarify issues such as: How are we applying our data? With whom is this data being shared? ... Although AI technologies aren’t new, the recent widespread adoption of AI has introduced a myriad of business and security challenges for organizations. Key questions to consider include: How do we monitor AI usage within the organization? How do we regulate the data shared with AI systems by employees? How do we ensure ongoing compliance with ethical standards and legal requirements?


2023 - The year of transformation and harmonisation

Millennial leaders bring a distinctively dynamic, digitalised approach to their roles, characterised by agility, openness, proactiveness, and hands-on engagement. Their adeptness in navigating the digital landscape seamlessly allows them to forge strong connections within their predominantly Gen Z and millennial workforce. This workforce, in turn, embodies an informed, forward-looking, and tech-savvy ethos, driven by cutting-edge technologies that facilitate smart and efficient work practices. In the world of leading-edge technologies, the arrival of ChatGPT by OpenAI in the preceding November continued to take centre stage. Throughout the year, there was a surge in competition and discussions surrounding AI, particularly generative AI, which gained momentum. Amidst these discussions, Google's introduction of Bard added fervour to the debate, igniting intense conversations about the potential impact of generative AI on employment and the perceived threat to various job roles. This stirred a pot of mixed emotions—feelings of anxiety, uncertainty, and ambiguity swirled within the tech sphere.


Small businesses lead the way, while larger industries lag in tech adoption

On the other hand, many leaders in the small and mid-sized industrial sector are in the age group of 50 and above. When they initially embarked on their careers in the core industry, the adoption of IT and technology in their companies was significantly lower. Technology was not as pervasive, and IT integration was often considered an unnecessary expense. For those who did attempt computerisation in the early 2000s, the experience was often disheartening. Small IT companies that provided software solutions during that period often faced challenges and many even disappeared. The owners of these companies, faced with the uncertainty and challenges of running a technology-based business, opted for well-paying jobs instead. This experience left a lasting impact on their perception of technology and its role in business operations. Moreover, the proliferation of the internet and the rise of startups introduced a new paradigm. Many services and software were offered for free or at significantly reduced rates, fostering an expectation of inexpensive or cost-free technology solutions. This demotivated many software company owners from continuing in the business. 


What’s Ahead for AI In 2024: The Transformative Journey Continues

The coming year will see a shift in how generative AI is employed by businesses, with a greater emphasis on using organizational data. Companies are increasingly cautious about sharing sensitive data on public platforms, opting instead to host private foundation models within their four walls. This move is driven by concerns over data security and the desire to customize AI applications to specific organizational needs. By using their own data, companies can ensure that AI output is relevant and in context. This trend will lead to innovative applications of generative AI in a variety of business functions. ... New tuning techniques such as prompt tuning and retrieval augmented generation (RAG) will gain popularity next year. These methods provide more context-specific adjustments to AI models without the need for extensive retraining. Prompt tuning, for example, uses smaller pre-trained models to encode text prompts; RAG combines specific information with prompts to enhance the relevance of the model's output.
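The RAG pattern described above can be sketched in a few lines: retrieve the most relevant snippets, then splice them into the prompt so the model answers in context. The corpus, the word-overlap scorer, and the prompt template below are toy stand-ins for a real embedding index and LLM call:

```python
import re

# Toy document store; a real system would use a vector index over embeddings.
CORPUS = [
    "Prompt tuning encodes task instructions with a small pre-trained model.",
    "RAG retrieves documents and adds them to the prompt for context.",
    "Quarterly revenue grew 12 percent on strong cloud demand.",
]

def toks(text: str) -> set:
    """Lowercase word set; crude stand-in for an embedding."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, k: int = 2):
    """Rank documents by word overlap with the query, return the top k."""
    q = toks(query)
    ranked = sorted(CORPUS, key=lambda d: len(q & toks(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Splice retrieved context into the prompt sent to the model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG add context to the prompt?"))
```

The point the article makes falls out of the structure: because the context is fetched at query time from the organization's own store, the base model needs no retraining to answer in the company's context, which is exactly why RAG pairs well with privately hosted foundation models.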



Quote for the day:

"People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - December 20, 2023

OpenAI announces ‘Preparedness Framework’ to track and mitigate AI risks

The announcement from OpenAI comes in the wake of several major releases focused on AI safety from its chief rival, Anthropic, another leading AI lab that was founded by former OpenAI researchers. Anthropic, which is known for its secretive and selective approach, recently published its Responsible Scaling Policy, a framework that defines specific AI Safety Levels and corresponding protocols for developing and deploying AI models. The two frameworks differ significantly in their structure and methodology. Anthropic’s policy is more formal and prescriptive, directly tying safety measures to model capabilities and pausing development if safety cannot be demonstrated. OpenAI’s framework is more flexible and adaptive, setting general risk thresholds that trigger reviews rather than predefined levels. ... Experts say both frameworks have their merits and drawbacks, but Anthropic’s approach may have an edge in terms of incentivizing and enforcing safety standards. From our analysis, it appears Anthropic’s policy bakes safety into the development process, whereas OpenAI’s framework remains looser and more discretionary, leaving more room for human judgment and error.


Australian federal government opens consultation on mandatory ransomware reporting obligation for businesses

The government is looking to develop legislation to "encourage" businesses to voluntarily provide information to the ASD and the Cyber Coordinator about a cyber incident on a limited-use basis that would prevent the agencies from using this information for compliance action against the reporting organizations. The idea is to give more information than current regulation requires so the agencies can provide better support when businesses are under attack and to mitigate harms to individuals arising from cyber security incidents. ... Home Affairs is seeking input from industry on the design and implementation of a cyber incident review board (CIRB). It is proposed that the CIRB would conduct no-fault incident reviews to reflect on lessons learned from cyber incidents, and share these lessons learned with the Australian public. The paper stated that the CIRB would not be a law enforcement, intelligence or regulatory body. It would be allowed to request information related to a cyber incident but would not have powers to compel an organization to do so.


US Lawmakers Urge Pushback on EU’s Big Tech Crackdown

CIOs, CISOs, and other IT leaders should keep a watchful eye on the EU's regulatory developments, Martha Heller, CEO at executive search firm Heller Search, tells InformationWeek. “The EU’s legislative move to curtail the power of US tech companies is a double-edged sword,” she says in an email interview. “Its mandate that the largest US-based tech companies give users more choice among services could give smaller technology companies a fighting chance. But its bias against US tech companies could limit the US’s ability to compete on the global market.” Heller adds, “As both producers and enterprise consumers of technology, CIOs and CTOs should pay close attention to the EU, as it leverages its watchdog position.” ... For CIOs, keeping track of regulatory considerations is not getting easier moving forward. “You have five big US tech companies that are primarily affected,” Chin-Rothmann says. “You must look at that in context with all of the other digital laws globally. It’s going to be a pretty complex regulatory patchwork. And when the EU regulates, other countries tend to follow suit.”


Web injections are back on the rise: 40+ banks affected by new malware campaign

Our analysis indicates that in this new campaign, threat actors’ intention with the web injection module is likely to compromise popular banking applications and, once the malware is installed, intercept the users’ credentials in order to then access and likely monetize their banking information. Our data shows that threat actors purchased malicious domains in December 2022 and began executing their campaigns shortly after. Since early 2023, we’ve seen multiple sessions communicating with those domains, which remain active as of this blog’s publication. Upon examining the injection, we discovered that the JS script is targeting a specific page structure common across multiple banks. When the requested resource contains a certain keyword and a login button with a specific ID is present, new malicious content is injected. Credential theft is executed by adding event listeners to this button, with an option to steal a one-time password (OTP) token with it. This web injection doesn’t target banks with different login pages, but it does send data about the infected machine to the server and can easily be modified to target other banks.
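The trigger condition described (a keyword in the requested resource plus a login button with a specific ID) amounts to a simple predicate, which defenders can also use to test whether their own pages match a campaign's targeting. The keyword, button ID, and page snippets below are hypothetical, for illustrating the logic only:

```python
# Hypothetical indicators; a real campaign's values would come from
# threat-intelligence analysis of the injected JS.
TARGET_KEYWORD = "retail/login"   # keyword expected in the requested resource
TARGET_BUTTON_ID = "btn-signin"   # ID of the login button the script hooks

def injection_triggers(url: str, page_html: str) -> bool:
    """True when a page matches both conditions the injection checks
    before adding its credential-stealing event listeners."""
    return (TARGET_KEYWORD in url
            and f'id="{TARGET_BUTTON_ID}"' in page_html)

print(injection_triggers(
    "https://bank.example/retail/login",
    '<button id="btn-signin">Log in</button>'))   # True
print(injection_triggers(
    "https://bank.example/business/login",
    '<button id="btn-signin">Log in</button>'))   # False
```

This structural matching is why the campaign generalizes across 40+ banks that share a page template, yet silently skips banks with different login pages, exactly as the researchers observed.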


New Malvertising Campaign Distributing PikaBot Disguised as Popular Software

The latest initial infection vector is a malicious Google ad for AnyDesk that, when clicked by a victim from the search results page, redirects to a fake website named anadesky.ovmv[.]net that points to a malicious MSI installer hosted on Dropbox. It's worth pointing out that the redirection to the bogus website only occurs after fingerprinting the request, and only if it's not originating from a virtual machine. "The threat actors are bypassing Google's security checks with a tracking URL via a legitimate marketing platform to redirect to their custom domain behind Cloudflare," Segura explained. "At this point, only clean IP addresses are forwarded to the next step." Interestingly, a second round of fingerprinting takes place when the victim clicks on the download button on the website, likely in an added attempt to ensure that it's not accessible in a virtualized environment. Malwarebytes said the attacks are reminiscent of previously identified malvertising chains employed to disseminate another loader malware known as FakeBat (aka EugenLoader).


SSH shaken, not stirred by Terrapin vulnerability

As the university trio put it this week, a successful Terrapin attack can "lead to using less secure client authentication algorithms and deactivating specific countermeasures against keystroke timing attacks in OpenSSH 9.5." In some very specific circumstances, it could be used to decrypt some secrets, such as a user's password or portions of it as they log in, but this is non-trivial and unlikely to succeed in practice. Let's get to the nitty-gritty. We'll keep it simple; for the full details, see the paper. When an SSH client connects to an SSH server, before they've established a secure, encrypted channel, they will perform a handshake in which they exchange information about each other in plaintext. Each side has two sequence counters: one for received messages, and one for sent messages. Whenever a message is sent or received, the relevant sequence counter is incremented; the counters thus keep a running tally of the number of sent and received messages for each side. As a MITM attack, Terrapin involves injecting a plaintext 'ignore' message into the pre-secure connection, during the handshake, so that the client thinks it came from the server and increments its sequence counter for received messages. The message is otherwise ignored.
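The counter bookkeeping above, and how an injected 'ignore' message desynchronizes it, can be modeled with a toy simulation. This is a simplification of the real SSH transport layer (RFC 4253 packet sequence numbers), not an implementation of the attack:

```python
# Toy model of SSH handshake sequence counters: each peer keeps a
# running tally of messages it has sent and messages it has received.
class Peer:
    def __init__(self):
        self.sent = 0
        self.received = 0

    def send_message(self):
        self.sent += 1

    def receive_message(self):
        self.received += 1

client, server = Peer(), Peer()

# Normal handshake traffic: the counters stay in lockstep.
server.send_message()
client.receive_message()
assert client.received == server.sent == 1

# A MITM injects a plaintext 'ignore' message during the handshake.
# The client counts it as received, but the server never sent it.
client.receive_message()

# The counters are now out of step: the attacker can later drop a real
# handshake message to realign them, silently removing it from the
# stream without either side detecting the manipulation.
assert client.received == 2 and server.sent == 1
```

The attack works because the integrity checks on the first encrypted packets are computed over these counters; by skewing them during the plaintext phase, the attacker makes a stripped message go unnoticed.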


SMTP Smuggling Allows Spoofed Emails to Bypass Authentication Protocols

Using SMTP Smuggling, an attacker can send out a spoofed email purporting to come from a trusted domain and bypass the SPF, DKIM and DMARC email authentication mechanisms, which are specifically designed to prevent spoofing and its use in spam and phishing attacks. An analysis found that the attack technique could allow an attacker to send emails spoofing millions of domains, including ones belonging to high-profile brands such as Microsoft, Amazon, PayPal, eBay, GitHub, Outlook, Office365, Tesla, and Mastercard. The attack was demonstrated by sending spoofed emails apparently coming from the address ‘admin(at)outlook.com’. However, attacks against these domains are possible — or were possible, because some vendors have applied patches — due to the way a handful of major email service providers set up SMTP servers. The vendors identified by the researchers are GMX (Ionos), Microsoft and Cisco. The findings were reported to these vendors in late July. GMX fixed the issue after roughly 10 days. Microsoft assigned it a ‘moderate severity’ rating and rolled out a patch sometime in the middle of October.
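The root cause, per the public research, is that sending and receiving servers can disagree on which byte sequence terminates the DATA section (the standard end-of-data indicator is `<CRLF>.<CRLF>`, but some servers also accept variants like a bare `\n.\n`). A rough sketch of that parsing ambiguity, with a deliberately simplified splitter rather than real MTA code:

```python
# Sketch of the end-of-data ambiguity behind SMTP smuggling: if the
# outbound server treats only "\r\n.\r\n" as end-of-data but the inbound
# server also accepts a bare "\n.\n", an attacker can hide a second,
# spoofed message inside what the first server sees as one message body.
# split_messages is an illustrative simplification, not real MTA code.

def split_messages(stream: str, terminators: list) -> list:
    """Split an SMTP DATA stream on whichever terminators a server accepts."""
    for t in terminators:
        stream = stream.replace(t, "\x00")
    return [m for m in stream.split("\x00") if m]

stream = "From: admin@trusted.example\n.\nFrom: spoofed@victim.example\r\n.\r\n"

strict = split_messages(stream, ["\r\n.\r\n"])              # outbound server
lenient = split_messages(stream, ["\r\n.\r\n", "\n.\n"])    # inbound server

assert len(strict) == 1    # one message leaves the trusted server...
assert len(lenient) == 2   # ...but two arrive, the second one spoofed
```

Because the smuggled second message genuinely leaves the trusted provider's infrastructure, it inherits that provider's SPF/DKIM/DMARC standing, which is why the authentication checks pass.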


Digital Transformation: Composable Applications And Micro-Engagements

Composable applications are characterized by one simple concept. Organizations are evolving beyond the method of integrating low-level services, and they’re gravitating to consuming higher-level micro-engagements. Micro-engagements are defined as small, repeatable experiences that can be preconfigured and consumed within a larger environment. Organizations are questioning why they need to re-create the wheel (or in this instance, the experience) using low-level services. Why can’t they simply leverage commonly repeatable experiences and lower their overall technical debt while increasing overall agility? ... Once embraced, organizations adopting the composable application mindset will be biased toward vendors who provide use cases or process-specific micro-engagements. Out-of-the-box micro-engagements can be quickly and easily discovered, evaluated, integrated, branded and deployed with minimal effort and risk, and vendors that provide no-code platforms can enable organizations to quickly and easily create their own reusable micro-engagements.


CISO: Your Tech Security Guide

Every business, regardless of size, necessitates a security leader overseeing technology, information, and data security, even if not designated as a CISO. While midsize and larger enterprises commonly appoint a CISO within their C-suite, smaller businesses may delegate such responsibilities to a tech executive like a director of cybersecurity. Some smaller or startup enterprises opt to outsource the CISO role, enhancing protection for their intellectual property, data, and IT infrastructure. ... A CISO’s contribution lies in their comprehensive understanding of security, connecting various security facets with the organization’s IT systems and networks. They leverage this perspective to pinpoint security risks and devise effective management strategies. Successful CISOs adeptly articulate complex security issues in layman’s terms, enabling leadership to grasp the implications. ... Becoming a CISO involves understanding cybersecurity’s technical foundations alongside practical management principles, encompassing people, processes, and technology. Critical attributes include a fervor for information technology, commitment to ongoing learning, adept leadership, familiarity with security standards, and relevant certifications (CISSP, CISM).


Is Your Product Manager Hurting Platform Engineering?

Having a product manager from day one can lower oxygen levels for your platform team. Feedback may be filtered, delayed or misunderstood, massively reducing its value and making good outcomes less likely. Platform engineers should bathe in the full, grainy details of the feedback and use it to enrich their understanding of the tasks their customers are trying to complete — and where they are underserved when completing those jobs. This helps the platform team create innovative solutions that may solve multiple unmet needs. You don’t have to use the Jobs To Be Done (JTBD) framework here. The crucial detail is that by immersing yourself in the customer’s needs, you can come up with ideas that solve many pain points instead of falling into the feature-factory trap of solving problem after problem. ... While it’s tempting to think ahead to what happens when your platform has achieved total adoption, been spun into a subsidiary organization, and had a conference named after it, it’s worth understanding that scale is not why you’ll add a product manager.



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard