Daily Tech Digest - November 15, 2023

The IT Jobs AI Could Replace and the Ones It Could Create

Knowledge base managers and data scientists will be essential roles for enterprises as more and more data is fed into large language models (LLMs). “It's still a garbage in, garbage out problem, and if AI will now do more of our work, what we feed them is more important than ever,” says Katz. De Ridder expects prompt engineering to emerge as an important skill in the IT field rather than a distinct job. He describes new jobs that could come out of the AI boom: agent engineers and multi-agent system engineers. Agent engineers would maintain and adjust the AI agent processes, while multi-agent system engineers would function as project managers overseeing the complex processes and outcomes supported by multiple AI agents. These jobs will have myriad specializations tied to different fields, according to De Ridder. As more and more AI use cases emerge, IT workers could increasingly be looked at as AI co-pilots. How will they work alongside this technology to improve productivity, and how will they oversee AI capabilities to ensure the desired outcomes?


Microsoft Zero-Days Allow Defender Bypass, Privilege Escalation

But as with every Microsoft monthly update, there are several bugs in the latest batch that security experts agree merit greater attention than others. The three actively exploited zero-day bugs fit that category. One of them is CVE-2023-36036, a privilege escalation vulnerability in Microsoft's Windows Cloud Files Mini Filter Driver that gives attackers a way to acquire system-level privileges. Microsoft has assessed the vulnerability as being a moderate — or important — severity threat but has provided relatively few other details about the issue. Satnam Narang, senior staff research engineer at Tenable, identified the bug as something that is likely going to be of interest to threat actors from a post-compromise activity standpoint. An attacker requires local access to an affected system to exploit the bug, but exploitation is low in complexity and requires neither user interaction nor significant privileges. Windows Cloud Files Mini Filter Driver is a component that is essential to the functioning of cloud-stored files on Windows systems, says Saeed Abbasi, manager of vulnerability and threat research at Qualys.


How to infuse strategy into everything your company does

The strategic goal-setting landscape is evolving, moving beyond global companies like Patagonia. It’s shifting from top-down mandates to a dynamic, bidirectional model that fosters ambition and collaboration at all levels. In highly successful organizations like LeanIX, an enterprise architecture management firm, we have watched how OKRs have been both a philosophy and a recipe for success and growth. LeanIX’s use of OKRs is not just a way to break down the company’s strategy and to agree on a common focus for the quarter; it’s an integral part of adopting a growth mindset. This ensures that the entire organization is continuously thinking big, aiming high, and trying out new approaches to achieve the next significant leap. ... Contemporary boardrooms have to echo the aspirations and values of Gen Z, emphasising both diversity and innovation. Merely having organizational strategies and cultural values framed and displayed on walls won’t suffice. They must be actively lived and practiced. Over a third of Gen Z expect leaders to not just lead but inspire. They demand a transparency that goes beyond open communication. 


The Art of Digital Continuity: Ensuring Data Availability in Disasters

During disasters, managers and IT employees bear the emotional burden of maintaining a calm and efficient work environment. This emotional labor can lead to stress and burnout, so managing it is key to maintaining productivity and data security during disasters. Here are some ways these professionals can cope with the emotional toll:
Communication - Open and honest communication about the disaster’s impact is key for managing emotions. Keeping employees informed can help them feel more in control of the situation.
Support - Providing psychological support, such as counseling or mental health resources, can help employees cope with stress and anxiety during a disaster.
Training - Prioritizing training on disaster response and emotional management can better prepare IT professionals for high-stress situations.
... Remember that disaster preparedness is not a one-time effort — it requires continuous monitoring, testing, and adaptation to protect valuable data. When disaster strikes and data is lost, the first step is to create a new and improved information security plan.


Four Levels of Agile Requirements

Visioning: This is the initial step of gathering requirements. The goal is to help identify all the Themes and some of the features desired. This exercise begins to define the scope of what is expected.
Brainstorming: The goal of this step is to identify all the features and stories desired. The key here is Breadth First, Depth Later. So instead of discussing the details of each feature and story, our main goal is to FIND all the features and stories.
Breakdown: The goal of this step is to break down and slice the stories that are still too large (EPICs) into smaller chunks. You probably have already done a lot of slicing during brainstorming, but as you comb your backlog, the team will realize that some stories are still too large to be completed within an iteration. Slicing stories is an art and I will dedicate an entire blog to it!
Deep Dive: This is the step everyone wants to jump into right away! Yes, finally, let’s talk about the details: what will be on the screen, what are the exact business rules and how will we test them, what will the detailed process look like, and what tasks need to be done to complete this story.


Dynamic Availability: Protocol-Based Assurances

The distinctive feature of proof-based consensus protocols is that the protocol continues to function even when there is only one miner; miner nodes are therefore free to leave and re-enter the competition at any time. Thus, the protocol maintains availability even under undesirable network conditions. To deal with cases where there are multiple leaders (concurrent solvers of the puzzle), honest nodes follow a simple rule: select the ledger with the highest number of blocks (i.e., the longest chain). In cases where chains have equal lengths, pick the one that you witnessed earliest. Note that, in this scenario, there is no way to determine whether a set of adversaries is processing a parallel ledger without informing the rest of the network until their ledger becomes longer than the honest node's chain. Once they have a longer chain, they reveal it and wait for the rest of the network to adopt it, effectively discarding all transactions that were only in the abandoned blocks. Because of this, one can never be absolutely certain that a transaction is irreversible.
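The fork-choice rule described above — longest chain first, earliest-witnessed chain on a tie — can be sketched in a few lines. The `Chain` structure and its field names are illustrative only, not any particular client's API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Chain:
    blocks: List[str]   # block identifiers, genesis first
    first_seen: float   # timestamp when this node first witnessed the tip

def choose_chain(candidates: List[Chain]) -> Chain:
    """Select the ledger with the most blocks; break ties by
    preferring the chain this node witnessed earliest."""
    return max(candidates, key=lambda c: (len(c.blocks), -c.first_seen))
```

Because `max` compares the tuple `(length, -first_seen)`, a longer chain always wins, and among equal-length chains the one seen earliest does — which is exactly why a secretly mined longer chain, once revealed, displaces the honest one.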


Are firms using mergers and acquisitions to inherit talent?

“I don’t think there’ll be an explosion in the number of acquisitions over the year ahead, but the people and team acquisition element will play a bigger role than in the past,” she says. “Technology is moving so fast that if you acquire a team already working well together on bleeding-edge technology, you can be up and running from day one.” But purchasing a business to get hold of talent is one thing. Holding onto that talent to deliver on the hoped-for value from the acquisition is quite another. The problem here is that if employees are unhappy with the move, feel uncertain about the future, or cannot see any post-deal career progression opportunities, they will simply vote with their feet. ... A key problem with the way many M&A transactions are conducted though, he believes, is that “people tend to come last on the priority list after financing and geography” - even though “you’re asking them to do the equivalent of move home, which because the decision isn’t theirs, can feel threatening”. But Robbins warns: “You fundamentally need to retain people, skills and capabilities if the deal is going to be a success. The business depends on two things - its customers and its staff, and if you’re not giving them what they want, it’s not going to go well.”


Why the Future for Enterprise Success Has to be Agile

Agile solutions enable enterprises to mitigate risks and reduce project failures, gaining a competitive edge and seizing new opportunities in the digital age. Through iterative development and continuous feedback cycles, organizations can identify and address potential issues early on. This piece-by-piece approach minimizes the likelihood of costly mistakes and allows for corrections and updates in real-time, ensuring successful project delivery. Working in an Agile way also means that enterprises can be better prepared for the hype points in technology, such as the boom of generative AI this year. Agile enterprises are much better positioned to react and readjust their offerings in real time, addressing the interests of their market, than those with lengthy, drawn-out development timelines. This isn’t to say that Agile enterprises aren’t planning ahead, but instead that they follow a test-and-learn approach, with their plans being flexible and malleable to the ebbs and flows of the market.


Developer Empowerment Via Platform Engineering, Self-Service Tooling

“As a developer the way we build, test and deploy has gotten more complex,” Medina said, in her role play as a developer, lamenting her loss of autonomy in this time of public cloud, serverless workloads and Kubernetes. “Unfortunately that means that, as a developer, if I want to have access to the things that I need when I want them, I’m at the mercy of other teams to bring things up for me. I’m at the mercy of the platform engineering team and I hate waiting for people to do things for me,” she said. Indeed, a platform engineering team is never short on backlog items. But often they are so stuck performing the operations role that they aren’t able to build those golden paths and automation. “OK, as platform engineers, we have the keys to the so-called cloud kingdom, but, listen, it’s not all about you. It’s not all about DevEx. We also have to maintain reliable systems. And it’s too much work and we are super stressed. We are at the point where we are drowning in Jira tickets,” Villela replied, wearing the hat of a platform engineer.


Understanding OWASP’s Bill of Material Maturity Model: Not all SBOMs are created equal

Much as with other industry efforts such as zero trust, the journey towards establishing widespread mature BOMs with sufficient detail and depth will be just that — a journey. That said, resources such as OWASP's SBOM Guide and the BOM Maturity Model can serve as great tools that organizations, software suppliers and consumers can use to mature their implementation of SBOMs and ensure they are providing sufficient insight and details to be used in activities such as software asset inventory, vulnerability management and software supply chain security. ... While the journey may seem daunting, the alternative is continuing the historical status quo of blind software consumption with limited transparency and insight into the software we are consuming, its lineage, who's been involved in it and what has occurred to it along the way. We wouldn't settle for this level of opaque, risky consumption in other industries such as food and pharmaceuticals, and with software increasingly driving nearly every aspect of society, we shouldn't settle for a lack of transparency here either.



Quote for the day:

"Difficulties strengthen the mind, as labor does the body." -- Seneca

Daily Tech Digest - November 14, 2023

Balancing act: CISOs knife-edge role in modern cybersecurity

Enhanced personal liability and duty of care are becoming increasingly unavoidable for many industries under the NIS2 (Network and Information Systems Directive) - a directive to set higher standards for cybersecurity across the European Union - and DORA (Digital Operational Resilience Act). This change is unnerving for CISOs as their role is officially recognized by regulators, shareholders, and customers. 62% cited concerns about personal liability in a recent global survey by Proofpoint, demonstrating the increased pressures of the role. ... Cybercriminals are already experienced users of AI, with ransomware producers incorporating AI and machine learning techniques into their malware while using it to target specific victims and evade antivirus software detection. Such use of advanced technology is expected to continue as ransomware developers become more proficient in their tactics and multiply the challenges CISOs will face. While AI can automate threat detection and response, it requires an understanding of past threat activity. 


Exploring the Role of Consensus Algorithms in Distributed System Design

Consensus, in the context of distributed systems, is the act of getting a group of nodes to agree on a single value or outcome, even if failures and network delays occur. This agreement is vital for the proper functioning of distributed systems, for it ensures that all nodes operate cohesively and consistently, even when they are geographically dispersed. ... At the heart of many consensus algorithms is the concept of Leader election, as it establishes a single node responsible for coordinating and making decisions on behalf of the group. In other words, this leader ensures that all nodes in the system agree on a common value or decision, promoting order and preventing conflicts in distributed environments. Fault tolerance is a critical aspect of consensus algorithms as well, as it allows systems to continue functioning even in the presence of node failures, network partitions, or other unforeseen issues. Consistency, reliability, and fault tolerance are among the primary guarantees offered. 
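As a minimal illustration of the two ideas above, here is a toy leader election and majority-quorum check. The rules and function names are assumptions for the sketch — loosely in the spirit of the bully algorithm and majority voting, not the actual mechanics of any specific protocol such as Raft or Paxos:

```python
def elect_leader(live_nodes):
    """Deterministic leader election: every node applies the same rule
    to the same set of reachable node IDs, so all nodes converge on the
    same leader (here, the highest ID, as in the bully algorithm)."""
    if not live_nodes:
        raise ValueError("no live nodes to elect from")
    return max(live_nodes)

def has_quorum(votes, cluster_size):
    """A decision is final once a strict majority of the cluster agrees;
    a majority quorum tolerates up to floor((n - 1) / 2) failed nodes
    while still guaranteeing any two quorums overlap."""
    return votes > cluster_size // 2
```

The quorum-overlap property is what delivers the consistency guarantee the excerpt mentions: two majorities of the same cluster always share at least one node, so conflicting decisions cannot both be ratified.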


Rogue state-aligned actors are most critical cyber threat to UK

These groups have become emboldened to act with impunity regardless of whether or not they have Russia’s official backing, and the NCSC said it had “concerns” that these groups have a higher risk appetite than those advanced persistent threat (APT) actors – such as Sandworm – that operate as units of the Russian intelligence and military services. This makes them a far more dangerous threat because they may seek to attack CNI operators without constraint and without being able to fully understand, or control, the impact of their actions. The consequences of this could be exceptionally severe. At the same time, Russian APTs continue to advance their goal of weakening and dividing Moscow’s adversaries by interfering in the democratic process using mis- and disinformation and cyber attacks. ... Of particular concern ahead of the next election will be large language models (LLMs), which will almost certainly be used to generate fabricated content and deepfakes, and a developing trend of targeting the email accounts of prominent individuals, as previously reported.


Fostering an automation-driven operations mindset in enterprises

By embracing automation, companies are changing the way they operate. This can mean rethinking their entire business model to become more profitable and competitive. However, this change is not always easy. Businesses face various challenges, such as dealing with disruptions in the market, figuring out the right number of employees needed for their operations, and keeping up with the ever-changing market conditions. Businesses are recognising that in order to stay relevant and successful, they need to undergo a digital transformation. This means adopting new technologies and ways of doing things to achieve significant positive changes in their operations. Automation has the power to create these changes across all types of industries, including retail, logistics, manufacturing, and the BFSI sector. ... This shift is so significant that the market for industrial automation in India is expected to double from USD 13.23 billion in 2023 to USD 25.76 billion by 2028. This is a clear indication that companies are investing heavily in automation to ensure they remain competitive and up to date with the latest advancements.


MongoDB vs. ScyllaDB: A Comparison of Database Architectures

The MongoDB architecture enables high availability through the concept of replica sets. MongoDB replica sets follow the concept of primary-secondary nodes, where only the primary handles the write operations. The secondaries hold a copy of the data and can be enabled to handle read operations only. A common replica set deployment consists of two secondaries, but additional secondaries can be added to increase availability or to scale read-heavy workloads. MongoDB supports up to 50 secondaries within one replica set. Secondaries will be elected as primary in case of a failure at the former primary. ... Unlike MongoDB, ScyllaDB does not follow the classical relational database management system (RDBMS) architectures with one primary node and multiple secondary nodes, but uses a decentralized structure, where all data is systematically distributed and replicated across multiple nodes forming a cluster. This architecture is commonly referred to as multiprimary architecture. A cluster is a collection of interconnected nodes organized into a virtual ring architecture, across which data is distributed. 
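The virtual-ring architecture described for ScyllaDB can be sketched as a toy token ring. The hashing scheme and node names here are hypothetical, and real clusters assign many virtual tokens per node (plus replicas) rather than the single token used in this sketch:

```python
import hashlib
from bisect import bisect_right

class TokenRing:
    """Toy model of a multiprimary ring: each node owns a token, a
    partition key is hashed onto the same token space, and the key is
    assigned to the first node whose token is >= the key's token,
    wrapping around the ring."""

    def __init__(self, nodes):
        self.ring = sorted((self._token(n), n) for n in nodes)
        self.tokens = [t for t, _ in self.ring]

    @staticmethod
    def _token(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def owner(self, partition_key):
        i = bisect_right(self.tokens, self._token(partition_key))
        return self.ring[i % len(self.ring)][1]  # wrap past the last token
```

Because every node can compute `owner()` independently from the same ring state, any node can accept a request and route it — there is no single primary to funnel writes through, which is the contrast with MongoDB's replica-set model drawn above.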


Relationship management: The unsung art of optimizing IT teams

Getting the most out of IT staff and unleashing synergies among IT teams is among the more underappreciated skills an IT leader must have to optimize their organization’s efforts. And for that you must develop an uncanny knack for relationship management and an understanding of how differing personalities can enforce and work with one another to great effect. After all, IT brings together a diverse range of personalities, from statisticians, mathematicians, and developers who are rooted in the rigors of computer science, to liberal arts majors who might just as soon be writing a novel if it could pay the bills. So, how do you as an IT leader unify these wide-ranging personalities into a cohesive project team? The short answer is that you don’t try to change anyone. Instead, you seize on common goals most team members have: To see success, feel good about the work they do, and contribute in ways that play to their strengths — while avoiding what they find off-putting or unproductive.


As perimeter defenses fall, the identity-first approach steps into the breach

An identity-first strategy is all about knowing the identity of all humans and non-humans accessing points within the enterprise. In other words, the strategy calls for the organization to know each employee, contractor, and business partner as well as endpoint, server, or application that seeks to connect. It is often also called identity-centric or identity-first security. It's foundational to implementing zero trust because zero trust says trust no entity until that entity — whether human or machine — can authenticate that it is who it says it is and can verify it has been authorized to access the network, application, API, server, etc. that it's seeking to access. ... As Avijit explains, no single solution delivers an identity-first strategy. Rather, it requires a synthesis of policies, practices and technology — like nearly everything else in cybersecurity. Those elements must come together to achieve three key objectives, says Henrique Teixeira, senior director analyst at Gartner, a research and advisory firm.


Collaborative strategies are key to enhanced ICS security

Cooperation between IT (information technology) and OT (operational technology) departments is extremely important to address unique security challenges in industrial sectors. The IT department is usually responsible for managing computer systems, networks, and data, while the OT department manages operating systems, industrial control systems, and sensors. Synergy between these departments allows for a better understanding and confrontation of threats involving industrial control systems. IT teams have expertise in information security, and OT teams have years of experience working with industrial systems. By combining the knowledge of both departments, one can proactively identify and address security vulnerabilities and threats. The advantages of cross-training these departments are many. First, understanding both aspects – information technology and industrial technology – allows for more effective identification and analysis of security challenges that are specific to the industrial sectors.


3 cybersecurity compliance challenges and how to address them

Changes in regulations can be as rapid as the introduction of new products or the emergence of new threats and attacks. Thus, organisations need to be agile enough to keep up with regulatory changes. Unfortunately, not many of us have the ability to do this on our own. Cybersecurity skills shortage continues to be a problem when it comes to compliance. Many organisations lack the right people to properly address cyber threats, let alone continuously monitor regulatory changes. The challenge of keeping up with changing regulations can be addressed with the help of resources that track updates for you. Often, these are related to specific business niches. For companies involved in credit and financial service operations, for example, the cybersecurity alerts of the National Association of State Credit Union Supervisors (NASCUS) provide up-to-date information on the latest regulations that affect those in the business of extending credit and other financial services. There are also regulation monitoring subscription services that provide updates on regulations in general. 


Ethical Considerations in AI and Cloud Computing: Ensuring Responsible Development and Use

Transparency and ethics go hand in hand. With AI, transparency is an essential ethical practice that plays a role in meaningful consent, accountability, and algorithmic auditing. Transparency is essential for driving public acceptance and trust in AI. AI has been accused of having a “black box” problem, referring to the lack of transparency in how it operates and the logic behind its decisions. The use of complex algorithms and proprietary systems contributes to the problem. Ethical practices must address the black box issue by ensuring a high level of transparency in AI development and deployment. ... Assigning responsibility for the outcomes provided by AI-driven systems is perhaps the most important ethical consideration. If an AI-powered system guiding medical diagnosis makes a decision that leads to failed medical treatment, who should take responsibility? Is the AI developer, the technology firm that deployed the AI, or the doctor ultimately accountable for the bad information?



Quote for the day:

"A leader is one who sees more than others see, who sees farther than others see and who sees before others see.” -- Leroy Eimes

Daily Tech Digest - November 13, 2023

Navigating the Crossroads of Data Confidentiality and AI

Striking a balance between ensuring data privacy and maximizing the effectiveness of AI models can be quite complex. The more data we utilize for training AI systems, the more accurate and powerful they become. However, this practice often clashes with the need to safeguard privacy rights. Techniques like federated learning offer a solution by allowing AI models to be trained on data sources without sharing raw information. For the uninitiated, Federated Learning leverages the power of edge computing to train local models. These models use data that never leaves the private environment (like your phone, IoT devices, corporate terminals, etc.). Once the local models are trained, they are then leveraged to build a centralized model that can be used for related use cases. ... Due to the recent acceleration in the adoption of AI, government regulations play a pivotal role in shaping the future of AI and data confidentiality. Legislators are increasingly recognizing the significance of data privacy and are implementing laws such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA).
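The federated learning flow described above — local training on data that never leaves the device, followed by central aggregation — can be sketched as a FedAvg-style weighted average of client model parameters. This is a deliberately simplified illustration: real systems also handle secure aggregation, stragglers, and many training rounds, and the function name is an assumption:

```python
def federated_average(local_weights, sample_counts):
    """Combine locally trained model parameters into a global model.
    Only the parameter vectors are shared with the server, never the
    raw training data; each client's contribution is weighted by how
    many samples it trained on."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]
```

For example, two clients holding weight vectors `[1.0, 2.0]` and `[3.0, 4.0]` with equal data sizes would yield the global vector `[2.0, 3.0]` — the server learns a blended model without ever seeing either client's underlying records.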


CISOs vs. developers: A battle over security priorities

“Developers and CISOs juggle numerous security priorities, often conflicting across organizations,” noted Luke Shoberg, Global CISO at Sequoia Capital. “The report emphasizes the need for internal assessments, fostering deeper collaboration, and building trust among teams managing this critical domain. Recognizing technical and cultural obstacles, organizations have made significant strides in understanding the importance of securing the software supply chain for sustained business success.” “The world of software consumption and security has radically changed. From containers to the explosion of open source components, every motion has been toward empowering developers to build faster and better,” said Avon Puri, Global Chief Digital Officer at Sequoia Capital. “But with that progress, the security paradigm has been challenged to refocus on better controls and guarantees for the provenance of where software artifacts come from and that their integrity is being maintained. The survey shows developers and security teams are wrestling with this new reality in the wake of major exploits like Log4j and SolarWinds.”


Deception technology use to grow in 2024 and proliferate in 2025

It's worth mentioning that all scanning, data collection, processing, and analysis will be continuous to keep up with changes to the hybrid IT environment, security defenses, and the threat landscape. When organizations implement a new SaaS service, deploy a production application, or make changes to their infrastructure, the deception engine notes these changes and adjusts its deception techniques accordingly. Unlike traditional honeypots, burgeoning deception technologies won't require cutting-edge knowledge or complex setup. While some advanced organizations may customize their deception networks, many firms will opt for default settings. In most cases, basic configurations will sufficiently confound adversaries. Remember, too, that deception elements like decoys and lures remain invisible to legitimate users. Therefore, when someone goes poking at a breadcrumb or canary token, you are guaranteed that they are up to no good. In this way, deception technology can also help organizations improve security operations around threat detection and response.
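The "guaranteed to be up to no good" property of decoys can be made concrete with a tiny honeytoken check. The decoy account names and the alert string below are purely hypothetical; the point is only that legitimate users never see these accounts, so any touch is a high-fidelity signal:

```python
# Hypothetical canary accounts seeded into the environment as lures.
# They appear plausible to an attacker but are referenced nowhere in
# legitimate workflows, so no real user ever authenticates with them.
DECOY_ACCOUNTS = {"svc-backup-old", "admin-legacy"}

def check_login(username):
    """Flag any authentication attempt against a decoy account.
    Because these accounts are invisible to legitimate users, an
    alert here carries essentially zero false-positive noise."""
    if username in DECOY_ACCOUNTS:
        return "ALERT: canary account touched"
    return "ok"
```

This is why deception telemetry slots so cleanly into detection and response: unlike most alert sources, it needs almost no triage before escalation.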


What Role Will Open-Source Hardware Play in Future Designs?

The extent of open-source hardware’s impact on electronics design is still uncertain. While it could likely lead to all these benefits, it also faces several challenges to mainstream adoption. The most significant of these is the volatility and high costs of the necessary raw materials. Roughly 70% of all silicon materials come from China. This centralization makes prices prone to fluctuations from local disruptions in China or throughout the supply chain. Similarly, long shipping distances raise related prices for U.S. developers. Even if integrated circuit design becomes more accessible, these costs keep production inaccessible, slowing open-source devices’ growth. Similarly, industry giants may be unwilling to accept the open-source movement. While open-source designs open new revenue streams, these market leaders profit greatly from their proprietary resources. The semiconductor fabs supporting these large companies are even more centralized. It may be difficult for open-source hardware to compete if these organizations don’t embrace the movement.


How Should Developers Respond to AI?

“Unionizing against AI” wasn’t a specific goal, Quick clarified in an email interview with The New Stack. He’d meant it as an example of the level of just how much influence can come from a united community. “My main thought is around the power that comes with a group of people that are working together.” Quick noted what happened when the United Auto Workers went on strike. “We are seeing big changes happening because the people decided collectively they needed more money, benefits, etc. I can only begin to guess at what an AI-related scenario would be, but maybe in the future, it takes people coming together to push for change on regulation, laws, limitations, etc.” Even this remains a concept more than any tangible movement, Quick stressed in his email. “Honestly, I don’t have much more specific actions or goals right now. We’re just so early on that all we can do is guess.” But there is another scenario where Quick thinks community action would be necessary to push for change: the hot-button issue of “who owns the code.” 


Security, privacy, and generative AI

For many of the proposed applications in which LLMs should excel, delivering false responses can have serious consequences. Luckily, many of the mainstream LLMs have been trained on numerous sources of data. This allows these models to speak on a diverse set of topics with some fidelity. However, there is typically insufficient knowledge around specialized domains in which data is relatively sparse, such as deep technical topics in medicine, academia, or cybersecurity. As such, these large base models are typically further refined via a process called fine-tuning. Fine-tuning allows these models to achieve better alignment with the desired domain. Fine-tuning has become such a pivotal advantage that even OpenAI recently released support for this capability to compete with open-source models. With these considerations in mind, consumers of LLM products who want the best possible outputs, with minimal errors, must understand the data on which the LLM was trained (or fine-tuned) to ensure optimal usage and applicability.


How to keep remote workers connected to company culture

As important as workplace collaboration and communication tools are, technology alone can’t keep remote workers engaged with business objectives. Before the pandemic, auto finance firm Credit Acceptance centered its operations around in-person interactions in its offices, for which it got accolades; after COVID-19 arrived, the company’s 2,200 employees had to work remotely. “You didn't work from home at all – [only in] rare circumstances,” said Wendy Rummler, chief people officer at Credit Acceptance. “We considered our culture too important, [we believed that] we couldn't maintain it if we had a fully remote workforce, or even partially for that matter.” Fast forward a couple of years and the picture is markedly different now, with almost all staffers now fully remote. Internal pulse surveys have found that employee engagement has remained as high as before the pandemic, said Rummler. This is no accident, she said; Credit Acceptance deliberately set out to maintain its work culture without regular person-to-person interactions.


Should AI Require Societal Informed Consent?

The concept of societal informed consent has been discussed in engineering ethics literature for more than a decade, and yet the idea has not found its way into society, where the average person goes about their day assuming that technology is generally helpful and not too risky. In most cases, technology is generally helpful and not too risky, but not in all cases. As artificial intelligence grows more powerful and is applied to more new fields (many of which may be inappropriate), these cases will multiply. How will technology producers know when their technologies are not wanted if they never ask the public? ... One of the characteristics of a representative democracy is that -- at least in theory -- our elected officials are looking out for the well-being of the public. ... It is time for the government and the public to have a new conversation, one about technology -- specifically artificial intelligence. In the past we’ve always given technology the benefit of the doubt; tech was “innocent until proven guilty” and a long-time familiar phrase in and around Silicon Valley has been “it’s better to ask forgiveness, not permission.” We no longer live in that world.


Harnessing the potential of generative AI in marketing

Augmenting human creativity with the power of generative AI holds so much promise that the use cases we know now are only the tip of the proverbial iceberg. Companies that are looking to get a head start should, therefore, ensure that they have laid down the foundations for doing so. An important consideration in deploying generative AI is the availability of data. Contextualisation is a key benefit of generative AI and large language models (LLMs). But for enterprises with legacy, on-premise systems, their data is usually isolated within silos. Organisations looking to deploy generative AI solutions for their marketing efforts should leverage cloud data platforms to unify all their internal data. Aside from breaking down silos, businesses should also ensure seamless access to all their data. A lot of the data generated by marketing teams is either unstructured or semi-structured, such as social media posts, emails, and text documents, to name a few. Marketing teams should ensure that their cloud data platforms can load, integrate, and analyse all types of data.


Managing Missing Data in Analytics

Missing at Random (MAR) is a very common missing data situation encountered by data scientists and machine learning engineers. This is mainly because MCAR- and MNAR-related problems are handled by the IT department, while data issues are addressed by the data team. MAR data imputation is a method of substituting missing data with a suitable value. Some commonly used data imputation methods for MAR are the following. In hot-deck imputation, a missing value is imputed from a randomly selected record drawn from a pool of similar data records; the probabilities of selecting the data are assumed equal because of the random function used to impute the data. In cold-deck imputation, the random function is not used to impute the value; instead, other functions, such as the arithmetic mean, median, and mode, are used. With regression data imputation, for example multiple linear regression (MLR), the values of the independent variables are used to predict the missing values in the dependent variable using a regression model. Here, the regression model is first derived, then validated, and finally the new values, i.e., the missing values, are predicted and imputed.
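The three imputation methods described can be sketched in a few lines of Python. The dataset, the missing positions, and the "income" predictor below are all invented for illustration:

```python
import random
import statistics

# Toy dataset: ages with missing values (None), assumed to be missing at random.
ages = [23, 25, None, 31, 29, None, 27, 33]
observed = [a for a in ages if a is not None]

# Hot-deck imputation: fill each gap with a randomly drawn observed value.
random.seed(0)  # fixed seed so the sketch is reproducible
hot_deck = [a if a is not None else random.choice(observed) for a in ages]

# Cold-deck imputation: fill gaps with a deterministic statistic (here, the mean).
mean_age = statistics.mean(observed)
cold_deck = [a if a is not None else mean_age for a in ages]

# Regression imputation: predict the missing dependent value from an
# independent variable. 'income' is a hypothetical, fully observed predictor.
income = [30, 34, 38, 45, 41, 50, 37, 48]
pairs = [(x, y) for x, y in zip(income, ages) if y is not None]
n = len(pairs)
mean_x = sum(x for x, _ in pairs) / n
mean_y = sum(y for _, y in pairs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in pairs) / sum(
    (x - mean_x) ** 2 for x, _ in pairs
)
intercept = mean_y - slope * mean_x
regressed = [
    a if a is not None else intercept + slope * x for x, a in zip(income, ages)
]
```

Note the trade-off the excerpt implies: hot-deck preserves the variance of the observed data, while the cold-deck mean shrinks it, and regression imputation exploits correlation structure the other two ignore.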



Quote for the day:

"Failure isn't fatal, but failure to change might be" -- John Wooden

Daily Tech Digest - November 12, 2023

The metaverse has virtually disappeared. Here's why it's generative AI's fault

"It's basically going through the Gartner Hype Cycle for Emerging Technologies," she says. "We've had the hype and now we're seeing the reality. The metaverse was capturing people's imagination. But we're still looking for proven use cases that are going to generate value." Searle's assertion that the metaverse is suffering a familiar fate to other over-hyped technologies is certainly one explanatory factor for the drop in interest in the metaverse. But another huge contributory factor is the rapid rise of artificial intelligence (AI). ... Of course, the rapid take up of generative AI isn't the only narrative in this story; there's a whole series of potential concerns, such as hallucinations, plagiarism, and ethics, that need to be dealt with sooner rather than later. But if you want to impress your family and friends with a tool that seems to work like magic, then generative AI is the one. On the other hand, the metaverse -- just like the blockchain before it -- feels a bit like a rabbit that's stuck in a magician's hat. Entering the metaverse often isn't as easy as its proponents have promised. 


Why the service industry needs blockchain, explained

The difficulty of integrating blockchain with existing infrastructure and processes is a significant obstacle. Because service providers frequently use a variety of platforms and technologies, achieving seamless integration can be difficult. It might be difficult to protect data security and privacy while still adhering to regulations. Blockchain’s transparency conflicts with the requirement to protect sensitive customer information, necessitating careful design and implementation of privacy measures. Another major challenge is establishing communication and data exchange across various blockchain networks and traditional systems. To facilitate seamless interoperability, service providers need to spend time developing standardized protocols, which can be expensive and time-consuming. Moreover, there are scalability concerns. Blockchain networks, especially public ones, may face limitations in handling a high volume of transactions efficiently. Delays and higher expenses may result from this, especially in service industries where several quick transactions are necessary.


Why developer productivity isn’t all about tooling and AI

Creative work requires some degree of isolation. Each time they sit down to code, developers build up context for what they’re doing in their head; they play a game with their imagination where they’re slotting their next line of code into the larger picture of their project so everything fits together. Imagine you’re holding all this context in your head — and then someone pings you on Slack with a small request. All the context you’ve built up collapses in that instant. It takes time to reorient yourself. It’s like trying to sleep and getting woken up every hour. ... Another factor that gets in the way of developer productivity is a lack of clarity on what engineers are supposed to be doing. If developers have to spend time trying to figure out the requirements of what they’re building while they’re building it, they’re ultimately doing two types of work: Prioritization and coding. These disparate types of work don’t mesh. Figuring out what to build requires conversations with users, extensive research, talks with stakeholders across the organization and other tasks well outside the scope of software development. 


Here’s What a Software Architect Does in an Agile Team

An architect is probably not a valid role on an agile team. I admit I have at times been overzealous with non-coding members of a dev team. The less militant version of this is to be aware of ‘pigs’ and ‘chickens’ in the agile sense. When making breakfast, chickens lay eggs but pigs literally have skin in the game. So only pigs should attend daily agile stand-ups. There are three problems with the role of architect in classic agile. Think of these as Lutheran protestant theses nailed to the door — or more likely to the planning wall. There are no upfront design phases in agile. “The best architectures, requirements, and designs emerge from self-organizing teams”. An architect cannot be an approver and cause of delay. This leads to the idea that architectural know-how should be spread out amongst the other team members. This is often the case — however it elides the fact that architectural responsibility doesn’t fall to anyone, even if people feel they may be accountable. Remember your RACI matrix. Should all agile developers be architects in a project? This makes little sense, since architecture describes a singular plan. 


AI’s Ability to Reason: Statistics vs. Logic

As a simplistic existence proof that today’s AI does not reason with logic, consider the following problem in basic algebra which was given to Bing/OpenAI GPT to solve. The gist of the problem shown in the figure below is that there are two rectangles, each having the same height (though this detail is not clearly stated in the sourcing 6th grade math text) but different widths. Areas for each are given. The rectangles are positioned in the corresponding math text to suggest that they may be aggregated into a larger rectangle having a width that is the sum of the widths of the smaller rectangles — maybe as a hint toward length. The request to find the length (height) and widths is a test to see whether OpenAI’s GPT via Bing would determine if there are sufficient equations matching unknowns. There aren’t. GPT didn’t discover the number of equations is one too few. Instead, it attempted to find length and widths, and it responded suggesting it had successfully solved the math problem. Everything started to go amuck when the insufficiency of the number of equations matched to the number of unknowns was missed, and the third equation given above simply is a function of the other two.
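The underdetermination GPT missed is easy to demonstrate: with only the two area equations, every candidate height yields a consistent pair of widths, so no unique answer exists. The areas below are illustrative, not the textbook's:

```python
# Two rectangles share an unknown height h; their widths w1, w2 are also
# unknown. Knowing only the areas gives two equations in three unknowns:
#   h * w1 = A1,  h * w2 = A2
# Any positive h produces a valid solution: the system is underdetermined.

A1, A2 = 24.0, 36.0  # illustrative areas (not from the cited 6th-grade text)

def solve_for(h):
    """Given a candidate height, return the widths implied by the areas."""
    return A1 / h, A2 / h

for h in (2.0, 3.0, 4.0):
    w1, w2 = solve_for(h)
    # Every candidate height satisfies both area equations exactly.
    assert abs(h * w1 - A1) < 1e-9 and abs(h * w2 - A2) < 1e-9
    print(f"h={h}: w1={w1}, w2={w2}")
```

Three distinct, equally valid solutions print out — exactly the check (equations versus unknowns) that a system reasoning with logic would perform before claiming to have "solved" the problem.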


Security Is EVERYBODY’s Business, But CISOs Need to Lead

Cybersecurity is not an audit or internal audit. There is a fine line of difference there. And as much as the CISO is seen as somewhat more of an enforcer, they need to be seen as an enabler to the business. CISOs need to have very direct, effective, and transparent communication with the board members when it comes to quantification of everything that they’re doing. And when I say quantification, what I mean is quantification of risks to the organization. Some of the board members will be closer to cybersecurity risks. Some of them may be closer to a reputational risk or a financial risk. But if a CISO can stitch that story together and quantify it for the audience of the board, I think that goes a long way. That’s what’s needed because, in the situation in the market that we are in right now, with the threat landscape changing, with new capabilities coming into play, I think it’s critical. CISOs need to ensure the message is articulated well in the boardroom.


How Agile Managers Use Uncertainty to Create Better Decisions Faster

Here's the problem I see with big, long-term, and final management decisions: the decision is too large to have any certainty at all. Remember I said I don't take long consulting engagements? Early in my consulting career, I learned that even a “guaranteed” consulting project was not a guarantee at all. Sure, the client might pay a kill fee (a portion of the unused project budget), but most of the time, the client said (on a Friday afternoon), “Thanks. The world has changed. Don't come back on Monday.” While I always continued my marketing so my business would survive, I felt as if the clients cheated themselves. Because we thought we had more time, we didn't create smaller goals and achieve them. Our work was incomplete—according to their goals. And that's what people remember. Not that they changed the circumstances, but that we didn't finish. That's exactly what happens when managers try to decide for a long time without revisiting their decisions. The world changes. If the world changes enough, the managers feel the need to lay people off, not just stop efforts. Those layoffs are a result of too-long and too-large management decisions.


Technical and digital debt can devastate your digital ambition

Of course, no organisation can afford zero technical debt (is this even possible?). The judgement here is targeting existing technical debt in a priority order. Deciding what not to do is just as important as what to do. You will be better able to manage the high expectations of stakeholders, shape the transformation and prioritise investment when you have this insight. Ask yourself these questions: What technical debt will act as an anchor when trying to increase the pace of change, irrespective of how fast your new IT engineering and product-based approaches to change are? Or to put it another way, which single piece of technical debt will limit the flow of value, irrespective of how slick everything else is? To be able to adapt at pace, at short notice, responding to market opportunities, where is your underlying technology strong but resistant to change? Customers just expect your digital channels to work; where must you improve the reliability of your service? Where can you increase cost effectiveness or risk mitigation through targeted automation as one of the treatment strategies available to you?


How AI and Crypto is Transforming the Future of Decentralized Finance

As time has passed, the crypto industry has evolved into a breeding ground for fraudulent activities and deception. Safeguarding investors from fraud has become increasingly vital, especially with the influx of initial coin offerings and new platforms entering the market. The encouraging news is that AI and crypto can effectively prevent fraud attempts and ensure that investors adhere to financial compliance. AI bots, for example, can detect and flag fraudulent transactions, preventing them from proceeding unless confirmed by a human. Confirming crypto transactions often takes up to 24 hours due to reliance on consensus methods. However, cases of transaction delays often pose a challenge for the crypto sector. With some recent advancements in AI technology, there have been some enhanced trade management options. Some companies are adopting innovative consensus methods that significantly reduce transaction times to just a few seconds. This improvement holds potential benefits for the ...


Secure Together: ATO Defense for Businesses and Consumers

First off, businesses need to take the lead in forming a stronger partnership with their customers. This means educating both customers and employees on proper security measures. Websites operating with user accounts, engaging individuals and corporations, often find themselves in the crosshairs of swindlers intent on ATO. We mentioned above that phishing is a common tactic. It’s imperative to consistently enlighten customers and employees about the looming menace of online security breaches like phishing, including how phishing attempts trick people and tips for not getting tangled. Adopt a vigilant stance on security by ingraining robust preventive protocols, including routine password updates and providing guidelines for safeguarding user credentials. ... Training does not end there. The MGM Resorts cyberattack we cited above also involved a fraudster tricking a customer support help desk. Businesses must train their staff on how to stop these attempted breaches — for example, by knowing how to ask questions that only a legitimate account holder could know the answer to.



Quote for the day:

"You may be good. You may even be better than everyone else. But without a coach you will never be as good as you could be." -- Andy Stanley

Daily Tech Digest - November 11, 2023

Mika becomes world's first robot CEO

In the era where many workers are worrying about artificial intelligence (AI) replacing their jobs, one company has announced that it is hiring the first humanoid robot chief executive officer (CEO). Dictador, a spirit brand based in Colombia’s Cartagena, has gone viral for appointing Mika, who is manifested as a robot. Mika is a research project between Hanson Robotics and Dictador. It has been customised to represent the company’s values. Hanson Robotics also created Sophia, the popular humanoid robot. ... At a recent event, Mika said, “My presence on this stage is purely symbolic. In reality, conferring an honorary professor title upon me is a tribute to the greatness of the human mind in which the idea of artificial intelligence was born. It is also a recognition of the courage and open-mindedness of the owner of Dictador, who entrusted his company to a humble spokesperson with a processor instead of a heart.” Emphasising how she is better than current CEOs, including Musk and Zuckerberg, she said, “In reality the notion of two powerful tech bosses having a cage fight is not a solution for improving the efficiency of their platforms”. 


Four Recommendations to Improve the Cyber Resilience Act

Policymakers must take a more proportionate, risk-based approach to determining the risk level of a product with digital elements and offer greater certainty for manufacturers to ascertain if a product is a critical one. While the Commission’s original proposal categorised every product in several broad categories as critical, the co-legislators now have the opportunity to take a more sophisticated approach. We recommend leveraging the Council’s risk-based approach with some key amendments, outlined here. ... It is crucial that the reporting obligations are aligned with the NIS 2 Directive to streamline reporting requirements and to avoid an unmanageable reporting burden for manufacturers and responsible authorities. This means that reporting under the CRA should be made to the CSIRTs under a single distributed reporting platform, and that incident reporting on security incidents should only concern “significant incidents”, as outlined in the European Parliament’s text.


What is a digital transformation strategy? Everything you need to know

At its most basic level, a DX strategy is the use of digital technologies to create or reimagine how customers are served and how work gets done. A well-thought-out and well-crafted digital transformation strategy ensures an organization correctly identifies what products, services and work need to be created or reimagined to remain competitive. For nonprofits or government agencies, this might mean effectively and efficiently delivering on their missions. ... A thoughtful DX strategy also focuses the organization's attention, said Kamales Lardi, author of The Human Side of Digital Business Transformation and CEO of Lardi & Partner Consulting. More specifically, it focuses the organization on the most pressing digital initiatives -- those that deliver value toward meeting its enterprise-wide goals. Lardi said this approach keeps teams from pursuing initiatives that introduce new technologies without understanding how they'll deliver value or implementing transformation projects that only help segments of the enterprise.


SolarWinds Fires Back at SEC Fraud Charges

“We categorically deny those allegations,” SolarWinds’ blog post said. “The company had appropriate controls in place before SUNBURST. The SEC misleadingly quotes snippets of documents and conversations out of context to patch together a false narrative about our security posture.” SolarWinds’ blog post details what it says are false claims that the attack exploited a VPN vulnerability. Other technical issues regarding the company’s compliance with the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) are also defended in the post. “The SEC is mixing apples and oranges, underscoring its lack of cybersecurity experience,” the blog post charged. “… the SEC fundamentally misunderstands what it means to follow the NIST CSF.” However, much of the SEC’s complaint focuses on Brown’s alleged mishandling of controls that led to the breach. The SEC contends that Brown stated in 2018 and 2019 that "the current state of security leaves us in a very vulnerable state for our critical assets," and that "access and privilege to critical systems/data is inappropriate."


Software Architecture Fundamentals: Building the Foundations of Robust Systems

Solutions architecture is the bridge between business requirements and software solutions. Architects in this domain transform business needs into comprehensive software designs, often through diagrammatic representations. They also evaluate the commercial impacts of various technology choices. Software architecture, the centerpiece of our discussion, is closely aligned with software development. It not only impacts the structural composition of software but also influences the organization’s structure. Software architects play a pivotal role in translating business objectives into concrete software components and their responsibilities, all while ensuring the system’s healthy evolution over time. ... In a distributed architecture, systems must adopt self-preservation mechanisms: Avoid overloading a failing system. Excessive requests to a struggling system can exacerbate the situation. Recognize that a slow system is often worse than an offline system in terms of user experience. A system should have a way to assess its health. 
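The "avoid overloading a failing system" rule above is commonly realized as a circuit breaker: after repeated failures, callers fail fast instead of piling more requests onto the struggling dependency, then probe again after a cooldown. A minimal sketch, with illustrative thresholds and names not taken from the article:

```python
import time

class CircuitBreaker:
    """Stop hammering a failing downstream system; probe again after a cooldown."""

    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (requests flow)

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast instead of retrying")
            self.opened_at = None  # cooldown elapsed: let one probe through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip: stop sending requests
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Failing fast also addresses the excerpt's point that a slow system is worse than an offline one: callers get an immediate, explicit error rather than queuing behind timeouts.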


Building resilience-focused organizations

Arguably, the most important aspect of building resilient software systems is automation. It effectively reduces human error, speeds up repetitive tasks, and guarantees consistent configurations. Through the automation of deployment, monitoring, and scaling processes, software systems can quickly adapt to evolving conditions and recover from failures more efficiently. In order to automate build commands, Amazon created a centralized, hosted build system called Brazil. The main functions of Brazil are compiling, versioning, and dependency management, with a focus on build reproducibility. Brazil executes a series of commands to generate artifacts that can be stored and then deployed. To deploy these artifacts, Apollo was created. Apollo was developed to reliably deploy a specified set of software artifacts across a fleet of instances. Developers define the process for a single host, and Apollo coordinates that update across the entire fleet of hosts. Developers could simply push-deploy their application to development, staging, and production environments. No logging into the host, no commands to run. 
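Brazil and Apollo are Amazon-internal systems, but the core pattern described — a developer defines the update for one host and an orchestrator rolls it across the fleet, halting before the whole fleet degrades — can be sketched generically. All names and the abort policy here are hypothetical, not Apollo's actual behavior:

```python
def deploy_to_fleet(hosts, update_one_host, max_failures=1):
    """Apply a per-host update function across a fleet, halting if too many fail.

    `update_one_host` stands in for what a developer defines for a single host;
    the orchestrator handles iteration, bookkeeping, and the abort decision.
    """
    succeeded, failed = [], []
    for host in hosts:
        try:
            update_one_host(host)
            succeeded.append(host)
        except Exception as exc:
            failed.append((host, exc))
            if len(failed) >= max_failures:
                break  # stop the rollout before the whole fleet is degraded
    return succeeded, failed
```

A real orchestrator adds health checks, wave sizing, and automatic rollback, but the division of labor is the same: per-host logic from the developer, fleet-wide coordination from the platform.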


What Are Data Sharing Agreements and Why Are They Important?

Before establishing data sharing agreements, it is crucial to have a clear understanding of their purpose and scope. These agreements serve as legal documents that outline the terms, conditions, and responsibilities of all parties involved in sharing data. By comprehending the purpose and scope, organizations can ensure that they establish agreements that effectively protect their interests and meet their objectives. The purpose of data sharing agreements is multifaceted. ... Several key factors must be considered: Data protection laws: Organizations must comply with data protection laws that govern the collection, storage, and sharing of personal information. Intellectual property rights: Data sharing agreements should address ownership rights of the shared data, including any intellectual property rights associated with it. Clear provisions on how the data can be used, reproduced, or modified should be included. Confidentiality and security: Agreements should outline measures to protect the confidentiality and security of shared data. This includes provisions for encryption, access controls, breach notification procedures, and liability for any breaches. 


Cyberattack Forces San Diego Hospital to Divert Patients

The attack on Tri-City Medical is among a rash of similarly disruptive ransomware and other cyber incidents that have been relentlessly hitting healthcare sector entities, including regional hospitals, in recent years, months and weeks. That includes an October ransomware attack on five hospitals in Ontario, Canada, and their shared IT services provider, which has been disrupting patient care at the facilities for several weeks and for which recovery work is expected to last into mid-December (see: Ontario Hospitals Expect Monthlong Ransomware Recovery). The Canadian hospitals have been directing many patients, including some cancer patients who need radiology treatment, to seek medical care elsewhere (see: 5 Ontario Hospitals Still Reeling From Ransomware Attack). A study released in January by the Ponemon Institute surveying 579 healthcare technology and security leaders says that patient care diversions due to ransomware are on the rise, with more respondents reporting patients being diverted to other facilities than the 65% who said so the year before.


Sure, real-time data is now 'democratized,' but it's only a start

"With platforms taking complexity away from the individual user or engineer, it has accelerated adoption across the industry. Innovation such as SQL support, help make it democratized and provide ease of access to the vast majority rather than a select few." ... Many companies' infrastructures aren't ready, and neither are the organizations themselves. "Some have yet to understand or see the value of real time, while others are all-in, with solutions that were designed for streaming throughout the organization," says Raikmo. "Combining datasets in motion with advanced techniques, such as watermarking and windowing, is not a trivial matter. It requires correlating multiple streams, combining the data in memory and producing merged stateful result sets, at enterprise scale and resilience." The good news is not every bit of data needs to be streaming or delivered in real time. "Organizations often fall into the trap of investing in resources to make every data point they visualize be in real time, even when it is not necessary," Jayaprakash points out. "However, this approach can lead to exorbitant costs and become unsustainable."
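Windowing, one of the techniques Raikmo mentions, turns an unbounded stream into bounded aggregates a system can actually emit. A toy tumbling-window sketch, with invented event data:

```python
from collections import defaultdict

def tumbling_window_sums(events, window_seconds):
    """Group (timestamp, value) events into fixed, non-overlapping windows.

    Returns {window_start: sum_of_values} -- the kind of bounded aggregate a
    stream processor emits instead of an unbounded running total.
    """
    windows = defaultdict(float)
    for ts, value in events:
        window_start = ts - (ts % window_seconds)  # floor to the window boundary
        windows[window_start] += value
    return dict(windows)

# Invented events: (timestamp in seconds, metric value).
events = [(1, 10.0), (4, 5.0), (12, 2.0), (14, 8.0), (21, 1.0)]
print(tumbling_window_sums(events, 10))  # {0: 15.0, 10: 10.0, 20: 1.0}
```

Real stream engines add watermarks on top of this, deciding how long to wait for late-arriving events before a window's result is considered final — which is where much of the non-trivial complexity Raikmo describes lives.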


AI is the future of cybersecurity. This is how to adopt it securely

Used effectively, AI can help prevent vulnerabilities from being written in the first place—radically transforming the security experience. AI provides context for potential vulnerabilities and secure code suggestions from the start (though please still test AI-produced code). These capabilities enable developers to write more secure code in real time and finally realize the true promise of “shift left.” This is revolutionary. Traditionally, “shift left” typically meant getting security feedback after you’ve brought your idea to code, but before deploying it to production. But with AI, security is truly built in, not bolted on. There’s no further way to “shift left” than doing so in the very place where your developers are bringing their ideas to code, with their AI pair programmer helping them along the way. It’s an exciting new era where generative AI will be on the front line of cyber defense. However, it’s also important to note that, in the same way that AI won’t replace developers, AI won’t replace the need for security teams. We’re not at Level 5 self-driving just yet. 



Quote for the day:

"Nobody can go back and start a new beginning, but anyone can start today and make a new beginning." -- Maria Robinson

Daily Tech Digest - November 10, 2023

The promise of collective superintelligence

The goal is not to replace human intellect, but to amplify it by connecting large groups of people into superintelligent systems that can solve problems no individual could solve on their own, while also ensuring that human values, morals and interests are inherent at every level. This might sound unnatural, but it’s a common step in the evolution of many social species. Biologists call the phenomenon Swarm Intelligence and it enables schools of fish, swarms of bees and flocks of birds to skillfully navigate their world without any individual being in charge. They don’t do this by taking votes or polls the way human groups make decisions. Instead, they form real-time interactive systems that push and pull on the decision-space and converge on optimized solutions. ... Can we enable conversational swarms in humans? It turns out we can, by using a concept developed in 2018 called hyperswarms, which divides real-time human groups into overlapping subgroups. ... Of course, enabling parallel groups is not enough to create a Swarm Intelligence. That’s because information needs to propagate across the population. This was solved using AI agents to emulate the function of the lateral line organ in fish.
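The push-and-pull convergence described above can be illustrated with a deliberately simple toy (a DeGroot-style averaging model, not the actual swarm system the article describes): agents on a ring repeatedly nudge their estimates toward their neighbours', and the group settles on a shared answer without any vote ever being taken.

```python
def converge(opinions, rounds=50, pull=0.5):
    """Toy swarm: each agent repeatedly moves its estimate partway toward the
    average of its two ring neighbours, instead of casting a one-shot vote."""
    estimates = list(opinions)
    n = len(estimates)
    for _ in range(rounds):
        nxt = []
        for i, e in enumerate(estimates):
            neighbour_avg = (estimates[i - 1] + estimates[(i + 1) % n]) / 2
            nxt.append(e + pull * (neighbour_avg - e))
        estimates = nxt
    return estimates

final = converge([2.0, 9.0, 4.0, 7.0, 3.0])
print(max(final) - min(final))  # spread shrinks toward zero: a shared estimate
```

Because the symmetric averaging preserves the group mean, the five agents here converge on 5.0 — a value no individual started with, which is the "collective estimate" intuition in miniature.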


There's Only One Way to Solve the Cybersecurity Skills Gap

The plain truth is that it's not just a numbers game. Many of these roles are considered "hard to fill" because they are for specialist skill sets such as forensic analysis, security architecture, interpreting malicious code, or penetration testing. Or they're for senior roles with three to six years' experience. Even if companies recruit people with high potential but not the requisite background, it will take years for these recruits to upskill to reach a sufficient standard. Moreover, if we throw open the gates completely, we risk diluting the industry by introducing a whole swath of people with no technical skills. Yes, soft skills are valuable and in short supply too, but relying on these alone to fill the workforce gap does nothing to address the problem businesses have: a lack of trained, competent cybersecurity professionals, resulting, once again, in less resilience. Another major hurdle is that many organizations are reluctant to invest in training because the job market is so volatile. There's a fear that, by investing in new recruits, those staff members will become a flight risk and put themselves back into that talent pool. 


The Struggle for Microservice Integration Testing

Integration testing is crucial for microservices architectures. It validates the interactions between different services and components, and you can’t successfully run a large architecture of isolated microservices without integration testing. In a microservices setup, each service is designed to perform a specific function and often relies on other services to fulfill a complete user request. While unit tests ensure that individual services function as expected in isolation, they don’t test the system’s behavior when services communicate with each other. Integration tests fill this gap by simulating real-world scenarios where multiple services interact, helping to catch issues like data inconsistencies, network latency and fault tolerance early in the development cycle. Integration testing provides a safety net for CI/CD pipelines. Without comprehensive integration tests, it’s easy for automated deployments to introduce regressions that affect the system’s overall behavior. By automating these tests, you can ensure that new code changes don’t disrupt existing functionalities and that the system remains robust and scalable.
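To make the gap concrete: a unit test of either toy service below passes in isolation, but only a test exercising both together checks that the cross-service interaction keeps data consistent. Both services and their names are invented for illustration; real versions would communicate over a network, which is precisely what adds the latency and fault-tolerance issues the excerpt mentions:

```python
class InventoryService:
    """Tracks stock levels; a real version would sit behind its own API."""
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, sku, qty):
        if self.stock.get(sku, 0) < qty:
            raise ValueError(f"insufficient stock for {sku}")
        self.stock[sku] -= qty

class OrderService:
    """Depends on InventoryService -- the interaction unit tests don't cover."""
    def __init__(self, inventory):
        self.inventory = inventory
        self.orders = []

    def place_order(self, sku, qty):
        self.inventory.reserve(sku, qty)  # the cross-service call
        self.orders.append((sku, qty))

# Integration test: exercise both services together and check consistency.
def test_order_updates_inventory():
    inventory = InventoryService({"widget": 5})
    orders = OrderService(inventory)
    orders.place_order("widget", 3)
    assert inventory.stock["widget"] == 2  # data stayed consistent across services
    try:
        orders.place_order("widget", 4)  # only 2 left: must fail cleanly
    except ValueError:
        pass
    assert len(orders.orders) == 1  # the failed order was not recorded

test_order_updates_inventory()
```

In a CI/CD pipeline this style of test runs automatically on every change, catching regressions in the inter-service contract before deployment.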


Google Cloud’s Cybersecurity Trends to Watch in 2024 Include Generative AI-Based Attacks

Threat actors will use generative AI and large language models in phishing and other social engineering scams, Google Cloud predicted. Because generative AI can create natural-sounding content, employees may struggle to identify scam emails through poor grammar or spam calls through robotic-sounding voices. Attackers could use generative AI to create fake news or fake content, Google Cloud warned. LLMs and generative AI “will be increasingly offered in underground forums as a paid service, and used for various purposes such as phishing campaigns and spreading disinformation,” Google Cloud wrote. On the other hand, defenders can use generative AI in threat intelligence and data analysis. Generative AI could allow defenders to take action at greater speeds and scales, even when digesting very large amounts of data. “AI is already providing a tremendous advantage for our cyber defenders, enabling them to improve capabilities, reduce toil and better protect against threats,” said Phil Venables, chief information security officer at Google Cloud, in an email to TechRepublic.


OpenAI’s gen AI updates threaten the survival of many open source firms

The new API, according to OpenAI, is expected to provide new capabilities including a Code Interpreter, Retrieval Augmented Generation (RAG), and function calling to handle “heavy lifting” that would previously require developer expertise in order to build AI-driven applications. The Assistants API, specifically, may cause revenue losses for open source companies including LangChain, LlamaIndex, and ChromaDB, according to Andy Thurai, principal analyst at Constellation Research. “For organizations that want to standardize on OpenAI, the more their platform offers, the less organizations will need other frameworks such as Langchain and LlamaIndex. The new updates allow developers to create their applications within a single framework,” said David Menninger, executive director at Ventana Research. However, he pointed out that until the new features, such as the new API, are made generally available, enterprises will continue to put applications into production by relying on existing open source frameworks.
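As a sketch of the "heavy lifting" those frameworks currently do, here is the retrieval half of RAG in miniature: rank documents by similarity to a query embedding and hand the best matches to the model as prompt context. The hand-made three-dimensional vectors below stand in for a real embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document "embeddings" -- in practice these come from an embedding model.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding; their text
    is what gets injected into the LLM prompt as grounding context."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['refund policy']
```

Frameworks like LangChain and LlamaIndex wrap this loop with document loaders, chunking, vector stores, and prompt assembly — the layer the Assistants API now offers natively.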


When net-zero goals meet harsh realities

There is a move towards greater precision and accountability at the non-governmental level, too. The principles of carbon emission measurement and reporting that underpin, for example, all corporate net-zero objectives tend to be agreed upon internationally by institutions such as the World Resources Institute and the World Business Council for Sustainable Development; in turn, these are used by bodies such as the SBTi and the CDP. Here too, standards are being rewritten, so that, for example, the use of carbon offsets is becoming less acceptable, forcing operators to buy carbon-free energy directly. With all these developments under way, there is a startling disconnect between many of the public commitments by countries and companies, and what most digital infrastructure organizations are currently doing or are able to do. ... The difference between the two surveys highlights a second disconnect. IBM’s findings, based on responses from senior IT and sustainability staff, show a much higher proportion of organizations collecting carbon emission data than Uptime’s.


CISOs Beware: SEC's SolarWinds Action Shows They're Scapegoating Us

The SEC had been trying to create accountability by holding boards liable for the cybersecurity incidents that inevitably occur from time to time. But now, in the case of SolarWinds, the SEC has turned around and gone directly after an individual who only recently became the CISO. Brown wasn't the CISO when the breaches happened: he had been SolarWinds' VP of security and architecture and head of its information security group between July 2017 and December 2020, and he stepped into the role of CISO in January 2021. The result of the SEC's failure to mandate security leadership on corporate boards is that it has resorted to holding the CISO liable. This shift underscores a significant transformation in the CISO landscape. From my perspective as a CISO, it's increasingly clear that technical security expertise is an essential requirement for the role. Each day, CISOs are tasked with making critical decisions, such as approving or accepting timeline adjustments for security risks that have the potential to be exploited.


Security in the impending age of quantum computers

The timeline for developing a cryptographically relevant quantum computer is highly contested, with estimates often ranging between 5 and 15 years. Although such a machine remains in the future, this is not a problem that can be deferred to future CIOs and IT professionals. The threat is live today because of “harvest now, decrypt later” attacks, whereby an adversary stores encrypted communications and data gleaned through classical cyberattacks and waits until a cryptographically relevant quantum computer is available to decrypt the information. Worse, data secured with weak encryption keys could be decrypted long before such a quantum computer even arrives. While some data clearly loses its value in the short term, social security numbers, health and financial data, national security information, and intellectual property retain value for decades, and the decryption of such data on a large scale could be catastrophic for governments and companies alike.
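The weak-key point can be made concrete without any quantum computing at all. The toy sketch below (numbers are deliberately tiny and chosen for illustration only, not real cryptography) shows why harvested ciphertext under a weak RSA key is already at risk: a small modulus can be factored classically, after which everything stored “now” becomes readable.

```python
# Toy illustration: "harvest now, decrypt later" against a weak RSA key.
# The adversary stores ciphertext today and breaks the key at leisure.

def trial_factor(n: int) -> int:
    """Return a nontrivial odd factor of n by trial division
    (feasible only for tiny moduli like this toy example)."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    raise ValueError("no factor found")

# Tiny "harvested" RSA key: modulus n = 61 * 53, public exponent e = 17.
n, e = 3233, 17
ciphertext = pow(42, e, n)        # intercepted today, stored for later

# Later: factor the weak modulus and derive the private exponent d.
p = trial_factor(n)
q = n // p
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)
recovered = pow(ciphertext, d, n)
print(recovered)  # 42 — the original message
```

A real 2048-bit modulus resists this classical attack, which is exactly where a cryptographically relevant quantum computer running Shor's algorithm changes the calculus; the storage-and-wait pattern is identical in both cases.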


How the Online Safety Act will impact businesses beyond Big Tech

The requirements that apply to all regulated services, including those outside the special categories, are naturally the least onerous under the Act; however, because these still introduce new legal obligations, for many businesses these will require considering compliance through a new lens. ... Regulated services will have to conduct certain risk assessments at defined intervals. The type of risk assessments a service provider must conduct depends on the nature and users of the service. ... Illegal content assessment: all providers of regulated services must conduct a risk assessment of how likely users are to encounter and be harmed by illegal content, taking into account a range of factors including user base, design and functionalities of the service and its recommender systems, and the nature and severity of harm that individuals might suffer due to this content. ... all regulated services must carry out an assessment of whether the service is likely to be accessed by children, and if so, they must carry out a children’s risk assessment of how likely children are to encounter and be harmed by content on the site, giving separate consideration to children in different age groups.
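The layered duties described above can be summarized as a simple checklist. The sketch below is a loose illustration only, not legal guidance: the flag names and assessment labels are simplified assumptions made for demonstration.

```python
# Illustrative checklist (not legal guidance) of the assessment duties the
# article describes for services regulated under the Online Safety Act.

def required_assessments(is_regulated: bool,
                         likely_accessed_by_children: bool) -> list[str]:
    """Return the risk assessments a service provider would need to conduct."""
    if not is_regulated:
        return []
    assessments = [
        "illegal content risk assessment",   # all regulated services
        "children's access assessment",      # all regulated services
    ]
    if likely_accessed_by_children:
        # Triggered by the access assessment; considers age groups separately.
        assessments.append("children's risk assessment")
    return assessments

print(required_assessments(True, True))
```

The key structural point the Act makes, mirrored here, is that the children's risk assessment is conditional on the outcome of the access assessment, while the first two duties apply to every regulated service.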


Enterprises vs. The Next-Generation of Hackers – Who’s Winning the AI Race?

Amidst a push for responsible AI development, major players in the space are on a mission to secure their tools from malicious use, but bad actors have already started to take advantage of the same tech to boost their skill sets. Enterprises are increasingly finding new ways to integrate AI into internal workflows and external offerings, which in turn has created a new attack vector for hackers. This expanded surface has opened the door to a new wave of sophisticated attacks using advanced methods and entry points that enterprises previously didn't have to secure. ... Today's threat landscape is transforming: hackers have tools at their fingertips that can rapidly advance their impact and an entirely new attack vector to explore. With growing enterprise use of AI offering an opportunity to expedite attacks, now is the time to focus on transforming security defenses. ... Despite scrutiny for its ability to equip cybercriminals with more advanced techniques, AI models can be used just as effectively by security and IT teams to mitigate these mounting threats.



Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer