Daily Tech Digest - July 03, 2024

“The artificial intelligence (AI) boom across all industries has fueled anxiety in the workforce, with employees fearing ethical usage, legal risks and job displacement,” EY said in its report. The future of work has shifted due to genAI in particular, enabling work to be done equally well and securely across remote, field, and office environments, according to EY. ... “We can say that the most common worry is that AI will impact an employee’s role – either making it obsolete entirely or changing it in a way which concerns the employee – for example, taking some of the challenge or excitement out of it,” Harris said. “And the point is, these perspectives are already having an impact – irrespective of what the future really holds.” Harris said in another Gartner survey, employees indicated they were less likely to stay with an organization due to concerns about AI-driven job loss. ... Organizations can also overcome employee AI fears and build trust by offering training or development on a range of topics, such as how AI works, how to create prompts and effectively use AI, and even how to evaluate AI output for biases or inaccuracies. And employees want to learn.


Importance of security-by-design for IT and OT systems in building a resilient security framework

Regular security testing, vulnerability assessments, penetration testing, and compliance audits are vital for identifying vulnerabilities and potential attack vectors. This proactive approach allows organizations to rectify weaknesses, enhance security posture, and protect their systems effectively. For OT systems, specialized testing methods that support the unique requirements of industrial environments are necessary. ... Developers should adopt secure coding practices, including coding standards, input validation, secure data storage, secure communication protocols, code reviews, and automated security testing. These measures help identify and mitigate security issues during the development phase, eliminating common vulnerabilities. Additionally, training developers in secure coding techniques and fostering a security-centric culture within development teams are equally crucial. ... Regular software updates and effective patch management are essential to address newly identified security vulnerabilities. Staying current with security patches and updates for all software components is crucial. 
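The input-validation practice mentioned above is easiest to see in code. Below is a minimal sketch (the command API, allowed values, and ID format are hypothetical, invented for illustration) of allow-list validation, which rejects anything not explicitly permitted rather than trying to enumerate known-bad input:

```python
import re

# Hypothetical allow-list for an industrial-control command endpoint.
ALLOWED_COMMANDS = {"start", "stop", "status"}
DEVICE_ID_PATTERN = re.compile(r"^[A-Z]{2}-\d{4}$")  # e.g. "PL-0042"

def validate_request(command: str, device_id: str) -> bool:
    """Accept only inputs on the allow-list; deny-lists of known-bad
    strings are far easier for an attacker to bypass."""
    return command in ALLOWED_COMMANDS and bool(DEVICE_ID_PATTERN.match(device_id))
```

The same allow-list principle applies whether the input is a form field, a protocol message, or an OT command frame.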


The Case For a Managed Career for Architects

The case for making architecture a managed profession stems from a few critical factors: Rising Levels of Societal Impact: The impact of technology is growing daily. The impact and difficulties of technology, not just threats but people’s daily interactions with it (subscription models, social media, passwords, banking, etc.), are increasingly important to the average person. ... Regulatory Pressure: Increasing pressure is coming to bear on all aspects of technology as it relates to government and regulation, from sustainability, privacy and accountability to impacts on purchasing, monopolies, identity and security. The more prevalent technology becomes in society, the more regulation must be met to ensure appropriate use. ... New Technology Opportunities/Threats: Avoiding catastrophes in both small and large scopes is one function of modern professions. Non-professionals are not allowed to play with dangerous research or deploy dangerous products in most fields. ... Severe Demand/Quality Problems: The demand for high-quality architecture professionals is growing daily. This demand can no longer be met by the role-based education methods that were developed in the early rush of the ’90s.


Is your bank’s architecture trapped in the past? It’s time to recompose it

The complexity of banking modernisation, particularly the cost and resource intensiveness associated with a big-bang approach, is one of several reasons many banks may still be stuck with a legacy technology platform. With contemporary architectural techniques such as the “strangulation pattern”, banks can achieve the desired modernisation in a streamlined manner. The strangulation pattern is a software migration strategy involving forming a new software layer, the “strangler”, around the legacy banking system. This strangler interacts with the core system’s data and functionality through well-defined APIs. Gradually, new functionalities are developed within the strangler layer in parallel with the legacy systems, allowing the bank to independently test and refine the new functionalities. Over time, more and more functionalities, based on needs and complexity, are migrated from the core system to the strangler layer. As a result, the core system becomes less and less critical and can be retired entirely or kept as a backup system. Not only does this approach minimise risk compared to a big-bang switchover, but it also allows business operations to continue with minimal disruption.
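The migration described above amounts to a thin routing facade: every capability is served by either the legacy core or the new strangler layer, and capabilities are flipped over one at a time. A minimal sketch, with all service names hypothetical:

```python
# Toy strangler facade: routes each banking capability to either the
# legacy core or the new layer, migrating one capability at a time.

def legacy_get_balance(account_id: str) -> str:
    return f"legacy balance for {account_id}"

def new_get_balance(account_id: str) -> str:
    return f"new-service balance for {account_id}"

class StranglerFacade:
    def __init__(self):
        # Everything starts on the legacy core; capabilities are flipped
        # to the new layer only after being built and verified in parallel.
        self.routes = {"get_balance": legacy_get_balance}

    def migrate(self, capability: str, handler) -> None:
        self.routes[capability] = handler

    def call(self, capability: str, *args):
        return self.routes[capability](*args)
```

Once every route points at the new layer, the core behind the facade can be retired or kept as a backup, exactly as the excerpt describes.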


Cyberinsurance Premiums are Going Down: Here’s Why and What to Expect

The insurance cycle is described in Wikipedia as “a term describing the tendency of the insurance industry to swing between profitable and unprofitable periods over time…” Such swings are common to all businesses but are particularly relevant to insurance. Within this insurance cycle, the swing is between a ‘hard market’ and a ‘soft market’. Howden defines it thus: “In simple terms, [a soft market] is when there is a lot of insurance capacity, and rates are low. Conversely, a hard market is when insurance capacity is reduced and premium rates are high.” Noticeably, the state of the insured does not figure. “Insurance markets (cyber, property, D&O, etc) tend to run through rating cycles,” explains George Mawdsley, head of risk solutions at DeNexus. “What makes cyber unique is that there is material uncertainty around how big the ‘Big Storms’ can get, which means capital allocators will make conservative assumptions on max downside or will not invest. Given the strong growth projections (demand) for the cyber insurance market, we expect this dynamic to drive up prices over the long term.”


Productivity and patience: How GitHub Copilot is expanding development horizons

Copilot shines in "implementing straightforward, well-defined components in terms of performance and other non-functional aspects. Its efficiency diminishes when addressing complex bugs or tasks requiring deep domain expertise." ... Copilot's greatest challenge is context, he pointed out. "Code and code development has a lot to do with the context that you're dealing with. Are you in a legacy code base or not? Are you in COBOL or in C++ or in JavaScript or TypeScript? It's a lot of context that needs to happen for the quality of that code to be high and for you to accept it." ... The impact on software development from AI will be subtler: "What if a text box is all they needed to be able to accomplish something that creates software and something that they could then derive value from?" For example, said Rodriguez: "If I could say very quickly in my phone, 'Hey, I am thinking of talking to my daughter about these things. Can you give me the last three X, Y, and Z articles and then just create a little program that we could play as a game?' You could envision Copilot being able to help you with that in the future."


How Part-Time Senior Leaders Can Help Your Business

It’s not only CEOs that benefit. With their deep functional expertise, fractionals often serve as advisors and mentors to other C-suite leaders. Barry Hurd, a fractional chief marketing officer (CMO), describes his role as providing expert counsel to full-time CMOs: “I’ve worked with a couple of CMOs who have hired me to simply double-check their work. I act as the executive coach, bringing my 30 years of wisdom and experience.” Similarly, Katie Walter, another fractional CMO, shares an experience where she supported an executive transitioning into a marketing leadership role: “She had never led the marketing function before, so the expectation was that I would work alongside her and help her to become more effective. In this case, I was introduced to the team as her coach.” The benefits also extend to the organization as a whole. Because fractional leaders often juggle multiple roles, they gain access to a wide professional network and are exposed to diverse working methods. This unique position allows them to introduce new ideas and practices among the organizations they serve. 


How the CISO Can Transform Into a True Cyber Hero

Operationalizing readiness, response, and recovery is where the rubber meets the road for the CISO. Plans, processes, and technologies underpin operations, but they each rely on people. Tabletop exercises that focus only on technical response activities strengthen only one "muscle group" of the organization. Consider a different kind of cyber exercise — a war game that involves the entire organization. By exercising the incident management plan with a broader constituency of stakeholders, organizations can build "muscle memory," test communication channels, and identify decisions or risks based on a given scenario. As part of the war game, the recovery team should run through the sequential restoration. By socializing the order in which operations will return after a disruption, the team can reduce the number of "Is it back online yet?" queries received during a real incident.  ... There's an old joke that "CISO" stands for "career is seriously over." But today’s CISO has a serious role to play as a hero for their organization. It is a simple matter of evolving from a primarily technical role to a role that incorporates empowering their human peers and stakeholders to become greater collaborators in cyber-incident response, recovery, and readiness.


How CISOs can protect their personal liability

One of the most effective and methodical forms of documentation that a CISO can maintain is a risk register that identifies existing cyber risk and records risk acceptance by relevant business stakeholders. This can help bring greater visibility into cyber risk to the board and it certainly helps CISOs to protect themselves. “In order to run a security program, you have to have a risk register. It’s like table stakes,” says Greg Notch, CISO of Expel, a managed detection and response firm, and a longtime security veteran who served as CISO for the National Hockey League prior to this job. ... Even with rock solid policies, procedures, and documentation, CISOs should also seek to establish legal protection through tools like indemnification agreements, employment contractual terms, and the right level of insurance protection. Kolochenko says CISOs that are unsure of their protections should proactively reach out to their general counsel and ask them about all of their duties, liabilities, and protections. If something sounds unfavorable, push back, he says.


How New Frameworks for Cyber Metrics are Reshaping Boardroom Conversations

Ideally, boards have one or more sitting executives with risk experience, but the reality is that boards primarily consist of executives with a non-technical understanding of risk management methods. Risk and cybersecurity information must always be conveyed in easy-to-understand, business-oriented language. Start by quantifying risk in monetary or dollar terms. Board members may not understand the technical details of Monte Carlo simulations or probabilistic risk assessments, but they do need to understand the potential impact of risk on the business in the most efficient way. Quantification can help anyone understand how the business anticipates risk, prioritizes risk controls, and takes preventative action against risk. Tailor risk information to board members, depending on their expertise and the board report’s purpose. There is no one-size-fits-all approach to reporting. CISOs can segregate risk metrics into categories, like security, financial, third-party, or employee awareness risks. Grouping information together helps non-technical executives understand how risks are interconnected and what’s being done to anticipate these risks.
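The monetary quantification mentioned above is commonly done with a Monte Carlo simulation: sample how often incidents occur and what each one costs, then report the resulting annual-loss distribution in dollar figures a board can digest. A toy sketch, with all frequencies and cost parameters invented purely for illustration:

```python
import random
import statistics

def simulate_annual_loss(trials: int = 10_000, seed: int = 42) -> dict:
    """Toy cyber-risk Monte Carlo: per-month incident chance x lognormal
    incident cost, repeated over many simulated years."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        # Roughly 5% chance of an incident in any given month (assumed).
        incidents = sum(1 for _ in range(12) if rng.random() < 0.05)
        # Lognormal cost per incident (assumed parameters).
        cost = sum(rng.lognormvariate(11, 1.0) for _ in range(incidents))
        losses.append(cost)
    losses.sort()
    return {
        "expected_annual_loss": statistics.mean(losses),
        "p95_loss": losses[int(0.95 * trials)],  # tail figure for the board
    }
```

The "expected annual loss" and "1-in-20-year loss" outputs are the kind of business-oriented numbers the excerpt recommends leading with, in place of simulation mechanics.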



Quote for the day:

“The more you lose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale

Daily Tech Digest - July 02, 2024

The Changing Role of the Chief Data Officer

The chief data officer originally played more “defense” than “offense.” The position focused on data security, fraud protection, and Data Governance, and tended to attract people from a technical or legal background. CDOs now may take on a more offensive strategy, proactively finding ways to extract value from the data for the benefit of the wider business, and may come from an analytics or business background. Of course, in reality, the choice between offense and defense is a false one, as companies must do both. ... Major trends for CDOs in the future will include incorporating cutting-edge technology, such as generative AI, large language models, machine learning, and increasingly sophisticated forms of automation. The role is also spreading to a wider variety of industry sectors, such as healthcare, the private sector, and higher education. One of the major challenges is already in progress: responding to the COVID-19 pandemic. The pandemic hugely shook global supply chains, created new business markets, and also radically changed the nature of business itself. 


Duplicate Tech: A Bottom-Line Issue Worth Resolving

The patchwork nature of combined technologies can hinder processes and cause data fragmentation or loss. Moreover, differing cybersecurity capabilities among technologies can expose the organization to increased risk of cyberattacks, as older or less secure systems may be more vulnerable to breaches. Retaining multiple technologies may initially seem prudent in a merger or acquisition, but ultimately it proves detrimental. The drawbacks — from duplicated data and disconnected processes to inefficiencies and security vulnerabilities — far outweigh any perceived benefits, highlighting the critical need for streamlined, unified IT systems. ... There are compelling reasons to remove the dead weight of duplicate technologies and adopt a singular technology. The first step in eliminating tech redundancy is to evaluate existing technologies to determine which tools best align with current and future business needs. A collaborative approach with all relevant stakeholders is recommended to ensure the chosen solution supports organizational goals and avoids unnecessary repetition.


Disability community has long wrestled with 'helpful' technologies—lessons for everyone in dealing with AI

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone. This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles. ... Disability advocates have long battled this type of well-meaning but intrusive assistance—for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control. The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over. A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. 


What is the Role of Explainable AI (XAI) In Security?

XAI in cybersecurity is like a colleague who never stops working. While AI helps automatically detect and respond to rapidly evolving threats, XAI helps security professionals understand how these decisions are being made. “Explainable AI sheds light on the inner workings of AI models, making them transparent and trustworthy. Revealing the why behind the models’ predictions, XAI empowers the analysts to make informed decisions. It also enables fast adaptation by exposing insights that lead to quick fine-tuning or new strategies in the face of advanced threats. And most importantly, XAI facilitates collaboration between humans and AI, creating a context in which human intuition complements computational power,” Kolcsár added. ... With XAI working behind the scenes, security teams can quickly discover the root cause of a security alert and initiate a more targeted response, minimizing the overall damage caused by an attack and limiting resource wastage. As transparency allows security professionals to understand how AI models adapt to rapidly evolving threats, they can also ensure that security measures are consistently effective.


10 ways AI can make IT more productive

By infusing AI into business processes, enterprises can achieve levels of productivity, efficiency, consistency, and scale that were unimaginable a decade ago, says Jim Liddle, CIO at hybrid cloud storage provider Nasuni. He observes that mundane repetitive tasks, such as data entry and collection, can be easily handled 24/7 by intelligent AI algorithms. “Complex business decisions, such as fraud detection and price optimization, can now be made in real-time based on huge amounts of data,” Liddle states. “Workflows that spanned days or weeks can now be completed in hours or minutes.”  “Enterprises have long sought to drive efficiency and scale through automation, first with simple programmatic rules-based systems and later with more advanced algorithmic software,” Liddle says.  ... “By reducing boilerplating, teams can save time on repetitive tasks while automated and enhanced documentation keeps pace with code changes and project developments.” He notes that AI can also automatically create pull requests and integrate with project management software. Additionally, AI can generate suggestions to resolve bugs, propose new features, and improve code reviews.


How Tomorrow's Smart Cities Will Think For Themselves

When creating a cognitive city, the fundamental need is to move the computing power to where data is generated: where people live, work and travel. That applies whether you’re building a totally new smart city or retrofitting technology to a pre-existing ‘brownfield’ city. Either way, edge is key here. You’re dealing with information from sensors in rubbish bins, drains, and cameras in traffic lights. ... But in years to come the city itself will respond dynamically to the changing physical world, adjusting energy use in real-time to respond to the weather, for example. The evolution of monitoring has come from a machine-to-machine foundation, with the introduction of the Internet of Things (IoT) and now artificial intelligence (AI) becoming transformational in enabling smart technologies to become dynamic. Emerging AI technologies such as large language models will also play a role going forward, making it easy for both city planners and ordinary citizens to interact with the city they live in. Edge will be the key ingredient which gives us effective control of these cities of the future.


Serverless cloud technology fades away

The meaning of serverless computing became diluted over time. Originally coined to describe a model where developers could run code without provisioning or managing servers, it has since been applied to a wide range of services that do not fit its original definition. This led to a confusing loss of precision. It’s crucial to focus on the functional characteristics of serverless computing. The elements of serverless—agility, cost-efficiency, and the ability to rapidly deploy and scale applications—remain valuable. It’s important to concentrate on how these characteristics contribute to achieving business goals rather than becoming fixated on the specific technologies in use. Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds. ... The explosion of generative AI also contributed to the shifting landscape. Cloud providers are deeply invested in enabling AI-driven solutions, which often require specialized computing resources and significant data management capabilities, areas where traditional serverless models may not always excel.


Infrastructure-as-code and its game-changing impact on rapid solutions development

Automation is one of the main benefits of adopting an IaC approach. By automating infrastructure provisioning, IaC allows configuration to be accomplished at a faster pace. Automation also reduces the risk of errors that can result from manual coding, empowering greater consistency by standardizing the development and deployment of the infrastructure. ... Developers can rapidly assemble and deploy infrastructure blocks, reusing them as needed throughout the development process. When adjustments are needed, developers can simply update the code the blocks are built on rather than making manual one-off changes to infrastructure components. Testing and tracking are more streamlined with IaC since the IaC code serves as a centralized and readily accessible source for documentation on the infrastructure. It also streamlines the testing process, allowing for automated unit testing of compliance, validation, and other processes before deploying. Additionally, IaC empowers developers to take advantage of the benefits provided by cloud computing. It facilitates direct interaction with the cloud’s exposed API, allowing developers to dynamically provision, manage, and orchestrate resources.
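The consistency described above comes from IaC's declarative, idempotent style: desired state is written as data, the engine diffs it against actual state, and only the difference is applied. A toy sketch of that loop (the resource names and the in-memory "actual state" are hypothetical stand-ins for a real provider API):

```python
# Toy IaC engine: desired state is plain data; the engine computes a
# plan (create/delete) and applies it idempotently to "actual" state.

def plan(desired: dict, actual: dict) -> dict:
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
    }

def apply_plan(desired: dict, actual: dict) -> dict:
    p = plan(desired, actual)
    for name in p["delete"]:
        del actual[name]
    for name in p["create"]:
        actual[name] = desired[name]
    return actual
```

Running the apply step a second time is a no-op, which is the property that makes real IaC tools repeatable and safe to automate.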


What is Multimodal AI? Here’s Everything You Need to Know

Multimodal AI describes artificial intelligence systems that can simultaneously process and interpret data from various sources such as text, images, audio, and video. Unlike traditional AI models that depend on a single type of data, multimodal AI provides a holistic approach to data processing. ... Although multimodal AI and generative AI share similarities, they differ fundamentally. For instance, generative AI focuses on creating new content from a single type of prompt, such as creating images from textual descriptions. In contrast, multimodal AI processes and understands different sensory inputs, allowing users to input various data types and receive multimodal outputs. ... Multimodal AI represents a significant advancement in the field of artificial intelligence. Therefore, by understanding and leveraging this advanced technology, data scientists and AI professionals can pave the way for more sophisticated, context-aware, and human-like AI systems, ultimately enriching our interaction with technology and the world around us. 


Excel Enthusiast to Supply Chain Innovator – The Journey to Building One of the Largest Analytic Platforms

While ChatGPT has helped raise awareness about AI capabilities, explaining how to integrate AI has presented challenges, especially when managing over 200 different data analytic reports. To address the different uses, Miranda has simplified AI into three categories: rule-based AI, learning AI (machine learning), and generative AI. Generative AI has emerged as the most dynamic tool among the three for executing and recording data analytics. Its versatility and adaptability make it particularly effective in capturing and processing diverse data sets, contributing to more comprehensive analytics outcomes. Miranda says, “People in analytics might not jump out of bed excited to tackle documentation, but it's a critical aspect of our work. Without proper documentation, we risk becoming a single point of failure, which is something we want to avoid.” ... These recordings are then converted into transcripts and securely stored in a containerized environment, streamlining the documentation process while ensuring data security. Because of process automation, Miranda says that the organization generated 240,000 work hours last year, and they anticipate even more this year.



Quote for the day:

"Life is like riding a bicycle. To keep your balance you must keep moving." -- Albert Einstein

Daily Tech Digest - July 01, 2024

The dangers of voice fraud: We can’t detect what we can’t see

The inherent imperfections in audio offer a veil of anonymity to voice manipulations. A slightly robotic tone or a static-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also remarkably insidious. Imagine receiving a phone call from a loved one’s number telling you they are in trouble and asking for help. The voice might sound a bit off, but you attribute this to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. Herein lies the danger: Voice fraud preys on our readiness to ignore minor audio discrepancies, which are commonplace in everyday phone use. Video, on the other hand, provides visual cues. There are clear giveaways in small details like hairlines or facial expressions that even the most sophisticated fraudsters have not been able to get past the human eye. On a voice call, those warnings are not available. That’s one reason most mobile operators, including T-Mobile, Verizon and others, make free services available to block — or at least identify and warn of — suspected scam calls.


Provider or partner? IT leaders rethink vendor relationships for value

Vendors achieve partner status in McDaniel’s eyes by consistently demonstrating accountability and integrity; getting ahead of potential issues to ensure there’s no interruptions or problems with the provided products or services; and understanding his operations and objectives. ... McDaniel, other CIOs, and CIO consultants agree that IT leaders don’t need to cultivate partnerships with every vendor; many, if not most, can remain as straight-out suppliers, where the relationship is strictly transactional, fixed-fee, or fee-for-service based. That’s not to suggest those relationships can’t be chummy, but a good personal rapport between the IT team and the supplier’s team is not what partnership is about. A provider-turned-partner is one that gets to know the CIO’s vision and brings to the table ways to get there together, Bouryng says. ... As such, a true partner is also willing to say no to proposed work that could take the pair down an unproductive path. It’s a sign, Bouryng says, that the vendor is more interested in reaching a successful outcome than merely scheduling work to do.


In the AI era, data is gold. And these companies are striking it rich

AI vendors have, sometimes controversially, made deals with organizations like news publishers, social media companies, and photo banks to license data for building general-purpose AI models. But businesses can also benefit from using their own data to train and enhance AI to assist employees and customers. Examples of source material can include sales email threads, historical financial reports, geographic data, product images, legal documents, company web forum posts, and recordings of customer service calls. “The amount of knowledge—actionable information and content—that those sources contain, and the applications you can build on top of them, is really just mindboggling,” says Edo Liberty, founder and CEO of Pinecone, which builds vector database software. Vector databases store documents or other files as numeric representations that can be readily mathematically compared to one another. That’s used to quickly surface relevant material in searches, group together similar files, and feed recommendations of content or products based on past interests. 


Machine Vision: The Key To Unleashing Automation's Full Potential

Machine vision is a class of technologies that process information from visual inputs such as images, documents, computer screens, videos and more. Its value in automation lies in its ability to capture and process large quantities of documents, images and video quickly and efficiently in quantities and speeds far in excess of human capability. ... Machine vision-based technologies are even becoming central to the creation of automations themselves. For example, instead of relying on human workers to describe the processes being automated, recordings of those processes are created; machine vision software, combined with other technologies, then captures each process end-to-end and provides the input for automating much of the work needed to program the digital workers (bots). ... Machine vision is integral to maximizing the impact of advanced automation technologies on business operations and paving the way for increased capabilities in the automation space.


Put away your credit cards — soon you might be paying with your face

Biometric purchases using facial recognition are beginning to gain some traction. The restaurant CaliExpress by Flippy, a fully automated fast-food restaurant, is an early adopter. Whole Foods stores offer pay-by-palm, an alternative biometric to facial recognition. Given that they are already using biometrics, facial recognition is likely to be available in their stores at some point in the future. ... Just as credit and debit cards have overtaken cash as the dominant means to make purchases, biometrics like facial recognition could eventually become the dominant way to make purchases. There will however be actual costs during such a transition, which will largely be absorbed by consumers in higher prices. The technology software and hardware required to implement such systems will be costly, pushing it out of reach for many small- and medium-size businesses. However, as facial recognition systems become more efficient and reliable, and losses from theft are reduced, an equilibrium will be achieved that will make such additional costs more modest and manageable to absorb.


Technologists must be ready to seize new opportunities

For technologists, this new dynamic represents a profound (and daunting) change. They’re being asked to report on application performance in a more business-focussed, strategic way and to engage in conversations around experience at a business level. They’re operating outside their comfort zone, far beyond the technical reporting and discussions they’ve previously encountered. Of course, technologists are used to rising to a challenge and pivoting to meet the changing needs of their organisations and their senior leaders. We saw this during the pandemic, many will (rightly) be excited about the opportunity to expand their skills and knowledge, and to elevate their standing within their organisations. The challenge that many technologists face, however, is that they currently don’t have the tools and insights they need to operate in a strategic manner. Many don’t have full visibility across their hybrid environments and they’re struggling to manage and optimise application availability, performance and security in an effective and sustainable manner. They can’t easily detect issues, and even when they do, it is incredibly difficult to quickly understand root causes and dependencies in order to fix issues before they impact end user experience. 


Vulnerability management empowered by AI

Using AI will take vulnerability management to the next level. AI not only reduces analysis time but also effectively identifies threats. ... AI-driven systems can identify patterns and anomalies that signify potential vulnerabilities or attacks. Converting logs into structured data and charts makes analysis simpler and quicker. Incidents should be prioritized by security risk, with notifications issued for immediate action. Self-learning is another area where AI can be trained with data. This will enable AI to stay up-to-date in a changing environment and capable of addressing new and emerging threats. AI will identify high-risk threats and previously unseen threats. Implementing AI requires iterations to train the model, which may be time-consuming. But over time, it becomes easier to identify threats and flaws. AI-driven platforms constantly gather insights from data, adjusting to shifting landscapes and emerging risks. As they progress, they enhance their precision and efficacy in pinpointing weaknesses and offering practical guidance.
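The anomaly detection described above can be illustrated with the simplest possible statistical baseline: flag any event count that deviates strongly from the historical mean. A z-score sketch (the threshold and the idea of using hourly log counts are assumptions for illustration; production systems use far richer models):

```python
import statistics

def find_anomalies(counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of event counts lying more than `threshold`
    standard deviations from the mean of the series."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:  # perfectly flat series: nothing to flag
        return []
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]
```

Even this crude baseline surfaces the kind of spike (a sudden burst of failed logins, say) that merits a risk-ranked notification; learned models refine the same idea.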


Why every company needs a DDoS response plan

Given the rising number of DDoS attacks each year and the reality that DDoS attacks are frequently used in more sophisticated hacking attempts to apply maximum pressure on victims, a DDoS response plan should be included in every company’s cybersecurity toolkit. After all, it’s not just a temporary lack of access to a website or application that is at risk. A business’s failure to withstand a DDoS attack and rapidly recover can result in loss of revenue, compliance failures, and impacts on brand reputation and public perception. Successful handling of a DDoS attack depends entirely on a company’s preparedness and execution of existing plans. Like any business continuity strategy, a DDoS response plan should be a living document that is tested and refined over the years. It should, at the highest level, consist of five stages: preparation, detection, classification, reaction, and postmortem reflection. Each phase informs the next, and the cycle improves with each iteration.
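
To make the detection stage concrete, here is a toy sliding-window rate detector: it flags a source whose request rate in a time window exceeds a threshold, at which point a real plan would escalate to the classification stage (volumetric vs. application-layer) and then reaction. The class name, thresholds, and IP are illustrative assumptions; production detection runs at the network edge on aggregate telemetry, not in application code.

```python
from collections import deque

class DdosDetector:
    """Toy detection stage: flag a source whose request count inside a
    sliding time window exceeds a limit."""
    def __init__(self, window_seconds=10, max_requests=100):
        self.window = window_seconds
        self.limit = max_requests
        self.hits = {}  # source ip -> deque of recent timestamps

    def record(self, ip, ts):
        q = self.hits.setdefault(ip, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:  # drop events outside the window
            q.popleft()
        return len(q) > self.limit  # True => escalate to classification

det = DdosDetector(window_seconds=10, max_requests=100)
# 150 requests arriving 50 ms apart from one source.
flagged = [det.record("203.0.113.7", t * 0.05) for t in range(150)]
print(flagged.index(True))  # → 100 (first request that trips the threshold)
```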


Reduce security risk with 3 edge-securing steps

Over the past several years, web-based SSL VPNs have been targeted and used to gain remote access. Consider evaluating how your firm allows remote access and how often your VPN solution has been attacked or put at risk. ... “The severity of the vulnerabilities and the repeated exploitation of this type of vulnerability by actors means that NCSC recommends replacing solutions for secure remote access that use SSL/TLS with more secure alternatives,” the authority says. “The NCSC recommends internet protocol security (IPsec) with internet key exchange (IKEv2). Other countries’ authorities have recommended the same.” ... Pay extra attention to how credentials that need to be accessed are protected from unauthorized access. Use best-practice processes to secure passwords and ensure that each user has strong passwords and appropriate access. ... When using cloud services, ensure that only vendors you trust or have thoroughly vetted have access to your cloud services.
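
On the "best-practice processes to secure passwords" point, one concrete, widely accepted practice is storing only a salted, slow-to-compute hash. A minimal sketch using Python's standard library, assuming PBKDF2-HMAC-SHA256 with an iteration count in line with current OWASP guidance (the function names and count here are illustrative, not a mandated standard):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a storable password hash with PBKDF2-HMAC-SHA256.
    A per-user random salt and a high iteration count are the key parts."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # → True
print(verify_password("wrong guess", salt, stored))                   # → False
```

Purpose-built password hashes such as Argon2 or scrypt are generally preferred where available; the structure (random salt, slow derivation, constant-time compare) is the same.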

The real key to machine learning success is something that is mostly missing from genAI: the constant tuning of the model. “In ML and AI engineering,” Shankar writes, “teams often expect too high of accuracy or alignment with their expectations from an AI application right after it’s launched, and often don’t build out the infrastructure to continually inspect data, incorporate new tests, and improve the end-to-end system.” It’s all the work that happens before and after the prompt, in other words, that delivers success. For genAI applications, partly because of how fast it is to get started, much of this discipline is lost. ... As with software development, where the hardest work isn’t coding but rather figuring out which code to write, the hardest thing in AI is figuring out how or if to apply AI. When simple rules need to yield to more complicated rules, Valdarrama suggests switching to a simple model. Note the continued stress on “simple.” As he says, “simplicity always wins” and should dictate decisions until more complicated models are absolutely necessary.



Quote for the day:

“The vision must be followed by the venture. It is not enough to stare up the steps - we must step up the stairs.” -- Vance Havner

Daily Tech Digest - June 30, 2024

The Unseen Ethical Considerations in AI Practices: A Guide for the CEO

AI’s “black box” problem is well-known, but the ethical imperative for transparency goes beyond just making algorithms understandable and their results explainable. It’s about ensuring that stakeholders can comprehend AI decisions, processes, and implications, guaranteeing they align with human values and expectations. Recent techniques, such as reinforcement learning from human feedback (RLHF), which aligns AI outcomes to human values and preferences, help ensure that AI-based systems behave ethically. This means developing AI systems in which decisions are in accordance with human ethical considerations and can be explained in terms that are comprehensible to all stakeholders, not just the technically proficient. Explainability empowers individuals to challenge or correct erroneous outcomes and promotes fairness and justice. Together, transparency and explainability uphold ethical standards, enabling responsible AI deployment that respects privacy and prioritizes societal well-being. This approach promotes trust, and trust is the bedrock upon which sustainable AI ecosystems are built.


Cyber resilience - how to achieve it when most businesses – and CISOs – don’t care

Organizations should ask themselves some serious, searching questions about why they are driven to keep doing the same thing over and over again – while spending millions of dollars in the process. As Bathurst put it: Why isn't security by design built in at the beginning of these projects, which are driving people to make the wrong decisions – decisions that nobody wants? Nobody wants to leave us open to attack. And nobody wants our national health infrastructure, ... But at this point, we should remind ourselves that, despite that valuable exercise, both the Ministry of Defence and the NHS have been hacked and/or subjected to ransomware attacks this year. In the first case, via a payroll system, which exposed personal data on thousands of staff, and in the second, via a private pathology lab. The latter incursion revealed patient blood-test data, leading to several NHS hospitals postponing operations and reverting to paper records. So, the lesson here is that, while security by design is essential for critical national infrastructure, resilience in the networked, cloud-enabled age must acknowledge that countless other systems, both upstream and downstream, feed into those critical ones.


Prominent Professor Discusses Digital Transformation, the Future of AI, Tesla, and More

“Customers are always going to have some challenges, and there are constant new technological trends evolving. Digital transformation is about intentionally moving towards making the experience more personalized by weaving new technology applications to solve customer challenges and deliver value,” shared Krishnan. However, as machine learning and GenAI help companies personalize their products and services, the tools themselves are also becoming more niche. “I think we’ll move to more domain and industry-specific generative AI and large language models. The healthcare industry will have an LLM, consumer packaged goods, education, etc,” shared Krishnan. “However, because companies will protect their own data, every large organization will create its own LLM with the private data. That’s why generative AI is interesting because it can actually get to be more personalized while also leveraging the broader knowledge. Eventually, we may all have our own individual GPTs.” ... Although new technologies such as GenAI and machine learning have had an immense impact in such a short time, Krishnan warns that guardrails are necessary, especially as our use of these tools becomes more essential.


Enhancing Your Company’s DevEx With CI/CD Strategies

Cognitive load is the amount of mental processing necessary for a developer to complete a task. Companies generally have one programming language that they use for everything. Their entire toolchain and talent pool is geared toward it for maximum productivity. On the other hand, CI/CD tools often have their own DSL. So, when developers want to alter the CI/CD configurations, they must work in this new, rarely used language. This becomes a time sink and imposes a high cognitive load. One of the ways to avoid giving developers high-cognitive-load tasks without reason is to pick CI/CD tools that use a well-known language. For example, the data serialization language YAML — not always the most loved — is an industry standard that developers would know how to use. ... In software engineering, feedback loops can be measured by how quickly questions are answered. Troubleshooting issues within a CI/CD pipeline can be challenging for developers due to a lack of visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, with software that is foreign to developers.
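
To illustrate the "well-known language" point: a YAML pipeline file typically parses into a plain data structure of stages, which an ordinary script in the team's main language can then validate. The schema below (stage `name`, `run` command, optional `needs` dependencies) is an illustrative invention, not any specific CI vendor's format.

```python
# What a small pipeline YAML file would parse into, plus a minimal validator.
pipeline = {
    "stages": [
        {"name": "build",  "run": "make build"},
        {"name": "test",   "run": "make test"},
        {"name": "deploy", "run": "make deploy", "needs": ["build", "test"]},
    ]
}

def validate(cfg):
    """Check that every stage has a name and command, and that
    dependencies only reference stages declared earlier."""
    errors = []
    seen = set()
    for stage in cfg.get("stages", []):
        if "name" not in stage or "run" not in stage:
            errors.append(f"stage missing name/run: {stage}")
            continue
        for dep in stage.get("needs", []):
            if dep not in seen:
                errors.append(f"{stage['name']}: unknown dependency {dep}")
        seen.add(stage["name"])
    return errors

print(validate(pipeline))  # → []
```

Because the config is just data in a familiar language, developers can lint, test, and reason about it with the tools they already use, which is precisely the cognitive-load argument the excerpt makes.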


Digital Accessibility: Ensuring Inclusivity in an Online World

"It starts by understanding how people with disabilities use your online platform," he said. While the accessibility issues faced by people who are blind receive considerable attention, it's crucial to address the full spectrum of disabilities that affect technology use, including auditory, cognitive, neurological, physical, speech, and visual disabilities, Henry added. ... The key is to review accessibility during content creation with a diverse group of people and address their feedback in iterations early and often. Bhowmick added that accessibility testing should always be run according to a structured testing script and mature testing methodologies to ensure reliable, reproducible, and sustainable test results. It is important to run accessibility testing during every stage of the software lifecycle: during design, before handing over the design to development, during development, and after development. A professional and thorough testing should take place before releasing the product to customers, Bhowmick said, and the test results should be made available in an accessibility conformance report (ACR) following the Voluntary Product Accessibility Template (VPAT) format.


How Cloud-Native Development Benefits SaaS

Cloud-native practices, patterns, and technologies enhance the benefits of SaaS and COTS while reducing the inherent negatives by: providing an extensible framework for adding new capabilities to commercial applications without having to customize the core product; leveraging API and event-driven architecture to bypass the need for custom data integrations; still offloading the complexity of most infrastructure and security concerns to a provider while gaining additional flexibility in scale and resilience implementation; and enabling opportunities to innovate core business systems with emerging technologies such as generative AI. Enterprises relying on SaaS or COTS still need the flexibility to meet their ever-evolving business requirements. As we have seen with advances in AI over the past year, change and opportunity can arrive quickly and without warning. Chances are that your organization is already on a journey to cloud-native maturity, so take advantage of this effort by implementing technologies and patterns, like leveraging event-driven architectures and serverless functions to extend your commercial applications rather than customizing or replacing them.


Cybersecurity as a Service Market: A Domain of Innumerable Opportunities

Traditional cybersecurity differs from cybersecurity as a service (CSaaS): different approaches are required depending on budget, size, and regulatory compliance requirements, and organizations are finding it tedious to rely entirely on themselves. The conventional method is to build an internal security team by hiring experienced security staff dedicated to cybersecurity duties, while CSaaS is an option where the company outsources the security function. A survey found that almost 72.1% of businesses find CSaaS solutions critical for their customer strategy. Let us now understand the growth aspects of the cybersecurity-as-a-service market. ... Some of the challenges to market growth are a lack of training and an inadequate workforce, limited security budgets among SMEs, and a lack of interoperability. The market in North America currently accounts for the largest share of worldwide revenue; this can be attributed to the region's high level of digitalization and the surge in the number of connected devices, which are projected to remain growth-propelling factors.


Top 5 (EA) Services Every Team Lead Should Know

The topic of sustainability is on everyone’s priority list these days. It has become an integral part of sociopolitical and global concepts. Not to mention, more and more customers are asking for sustainable products and services. Or alternatively, they only want to buy from companies that act and operate sustainably themselves. Sustainability must therefore be on the strategic agenda of every company. ... To effectively collaborate with your enterprise IT and ensure the best possible support while you’re making IT-related investment decisions, your IT service providers require feedback. For this, your list of software applications must be known. Deficits and opportunities for improvement need to be identified and, above all, a coordinated investment strategy for your IT services is a must. It has to be clear how you can use your IT budget in the most efficient way. ... What do all these different services have to do with EA? A lot. If the above-mentioned services are understood as EA services, their results form a valuable contribution to the creation of a holistic view of your company – the enterprise architecture.


Ensuring Comprehensive Data Protection: 8 NAS Security Best Practices

NAS devices are convenient to use as shared storage, which means they should be connected to other nodes. Normally, those nodes are the machines inside an organization’s network. However, the growing number of gadgets per employee can lead to unintentional external connections. Internet of Things (IoT) devices are a separate threat category. Hackers can target these devices and then use them to propagate malicious code inside corporate networks. If you connect such a device to your NAS, you risk compromising NAS security and then suffering a cyberattack. ... Malicious software remains a ubiquitous threat to any node connected to the network. Malware can steal, delete, and block access to NAS data or intercept incoming and outgoing traffic. Furthermore, the example of Stuxnet shows that powerful computer worms can disrupt and disable IT hardware or even entire production clusters. Insider threats are a further category: when planning an organization’s cybersecurity, IT experts reasonably focus on outside threats.


How to design the right type of cyber stress test for your organisation

The success of a cyber stress test largely depends on the realism and relevance of the scenarios and attack vectors used. These should be based on a thorough understanding of the current threat landscape, industry-specific risks, and emerging trends. Scenarios may range from targeted phishing campaigns and ransomware attacks to sophisticated, state-sponsored intrusions. By selecting scenarios that are plausible and aligned with your organisation’s risk profile, you can ensure that the stress test provides valuable insights and prepares your team for real-world challenges. ... A well-designed cyber stress test should encompass a range of activities, from table-top exercises and digital simulations to red team-blue team engagements and penetration testing. This multi-faceted approach allows you to assess the organisation’s capabilities across various domains, including detection, investigation, response, and recovery. Additionally, the stress test should include a thorough evaluation process, with clearly defined success criteria and mechanisms for gathering feedback and lessons learned.



Quote for the day:

“I'd rather be partly great than entirely useless.” -- Neal Shusterman

Daily Tech Digest - June 29, 2024

Urban Digital Twins: AI Comes To City Planning

Urban digital twin technology involves various tools and methods at each lifecycle phase, and because it is still an emerging field, there's a wide range of variability in available solutions. Different providers may focus on different aspects of the technology, offer varying levels of complexity, or specialize in specific use cases or lifecycle phases. Therefore, it's essential for organizations to carefully evaluate their requirements and compare the offerings of different providers to find the best fit for their specific needs. To make the most of urban digital twin technology, city officials and urban planners should first get a solid grasp on what it can do and the benefits it offers throughout a city's development. By aligning city goals to the capabilities of digital twin solutions at each lifecycle stage, teams can make sure they're picking the right tools for their specific needs. This way, cities can tailor their approach to urban digital twins, ensuring they're making the best choices to reach their desired outcomes and create a smarter, more efficient urban environment.


Empowering Citizen Developers With Low- and No-Code Tools

Whether you are building your own codeless platform or adopting a ready-to-use solution, the benefits can be immense. But before you begin, remember that the core of any LCNC platform is the ability to transform a user's visual design into functional code. This is where the real magic happens, and it's also where the biggest challenges lie. For an LCNC platform to help you achieve success, you need to start with a deep understanding of your target users. What are their technical skills? What kind of applications do they want to use? The answers to these questions will inform every aspect of your platform's design, from the user interface/user experience (UI/UX) to the underlying architecture. The UI/UX is crucial for the success of any LCNC platform, but it is just the tip of the iceberg. Under the hood, you'll need a powerful engine that can translate visual elements into clean, efficient code. This typically involves complex AI algorithms, data structures, and a deep understanding of various programming languages. 


Will AI replace cybersecurity jobs?

While AI and ML can streamline many cybersecurity processes, organizations cannot remove the human element from their cyberdefense strategies. Despite their capabilities, these technologies have limitations that often require human insight and intervention, including a lack of contextual understanding and susceptibility to inaccurate results, adversarial attacks and bias. Because of these limitations, organizations should view AI as an enhancement, not a replacement, for human cybersecurity expertise. AI can augment human capabilities, particularly when dealing with large volumes of threat data, but it cannot fully replicate the contextual understanding and critical thinking that human experts bring to cybersecurity. ... AI can automate threat detection and analysis by scanning massive volumes of data in real time. AI-powered threat detection tools can swiftly identify and respond to cyberthreats, including emerging threats and zero-day attacks, before they breach an organization's network. AI tools can also combat insider threats, a significant concern for modern organizations.


Decoding OWASP – A Security Engineer’s Roadmap to Application Security

While the OWASP Top 10 provides a foundational framework for understanding and addressing the most critical web application security risks, OWASP offers a range of other resources that can be instrumental in developing and refining an application security strategy. These include the OWASP Testing Guide, Cheat Sheets, and a variety of tools and projects designed to aid in the practical aspects of security implementation. OWASP Testing Guide – The OWASP Testing Guide is a comprehensive resource that offers a deep dive into the specifics of testing web applications for security vulnerabilities. It covers a wide array of potential vulnerabilities beyond the Top 10, providing guidance on how to rigorously test and validate each one. ... OWASP Cheat Sheets – The OWASP Cheat Sheets are concise, focused guides containing the best practices on a specific security topic. They serve as handy guides for security teams and developers to quickly reference when implementing security measures. Cheat sheets can be used as training materials to educate developers and security professionals on specific security issues and how to mitigate them.


Intel Demonstrates First Fully Integrated Optical I/O Chiplet

The fully integrated OCI chiplet leverages Intel’s field-proven silicon photonics technology and integrates a silicon photonics integrated circuit (PIC), which includes on-chip lasers and optical amplifiers, with an electrical IC. The OCI chiplet demonstrated at OFC was co-packaged with an Intel CPU but can also be integrated with next-generation CPUs, GPUs, IPUs and other system-on-chips (SoCs). This first OCI implementation supports up to 4 terabits per second (Tbps) bidirectional data transfer, compatible with peripheral component interconnect express (PCIe) Gen5. The live optical link demonstration showcases a transmitter (Tx) and receiver (Rx) connection between two CPU platforms over a single-mode fiber (SMF) patch cord. ... The current chiplet supports 64 channels of 32 Gbps data in each direction up to 100 meters (though practical applications may be limited to tens of meters due to time-of-flight latency), utilizing eight fiber pairs, each carrying eight dense wavelength division multiplexing (DWDM) wavelengths.
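
The stated channel layout is enough to reconstruct the headline bandwidth figure, which is worth a quick sanity check:

```python
# 8 fiber pairs x 8 DWDM wavelengths = 64 channels per direction.
fiber_pairs = 8
wavelengths_per_fiber = 8
channels = fiber_pairs * wavelengths_per_fiber       # 64

gbps_per_channel = 32
per_direction_gbps = channels * gbps_per_channel     # 2048 Gbps ≈ 2 Tbps

bidirectional_tbps = 2 * per_direction_gbps / 1000   # ≈ 4 Tbps
print(channels, per_direction_gbps, bidirectional_tbps)  # → 64 2048 4.096
```

So 64 channels at 32 Gbps give roughly 2 Tbps per direction, matching the "up to 4 Tbps bidirectional" claim.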


Artificial General Intelligence (AGI): Understanding the Milestones

The idea of creating a machine or program capable of thinking and acting like a person was first proposed in the early 1900s. The Turing Test, designed by Alan Turing in 1950 to assess intelligence comparable to that of humans, set the stage. ... Machine learning emerged in the 1950s and 1960s as a result of statistical algorithms that could identify patterns in data and use them to make future decisions without external supervision. ... Expert systems and symbolic AI centered on the encoding of knowledge and the application of rules and symbols in human reasoning. ... Deep learning, a subset of machine learning, has been a crucial breakthrough on the journey toward AGI. In tasks like speech and image recognition, convolutional neural networks and recurrent neural networks perform at a human level of intelligence. ... AGI research has produced numerous important results, ranging from theoretical foundations to deep learning advances. Even if AGI remains an ideal, present AI research is pushing the envelope, imagining a time when AI will fundamentally revolutionize the way we live and work for the better.


Unlocking Innovation: How Critical Thinking Supercharges Design Thinking

Critical thinking involves the objective analysis and evaluation of an issue to form a judgment. It's about questioning assumptions, discerning hidden values, evaluating evidence, and assessing conclusions. This methodical approach is crucial in professional environments for making informed decisions, solving complex problems, and planning strategically. ... Design thinking is a human-centered approach to innovation that integrates the needs of people, the possibilities of technology, and the requirements for business success. It involves five key stages: Empathize, Define, Ideate, Prototype, and Test. Design thinking promotes creativity, collaborative effort, and iterative learning. Merging critical thinking into the design thinking process enhances each stage with thorough analysis and robust evaluation, leading to innovative and effective solutions. ... Critical thinking provides the analytical rigor needed to identify core issues and evaluate solutions, while design thinking fosters creativity and user-centered design.


DAST Vs. Penetration Testing: Comprehensive Guide to Application Security Testing

Dynamic Application Security Testing (DAST) and penetration testing are crucial for identifying and mitigating security vulnerabilities in web application security. While both aim to enhance application security, they differ significantly in their approach, execution, and outcomes. ... Dynamic Application Security Testing (DAST) is an automated security testing methodology that interacts with a running web application to identify potential security vulnerabilities. DAST tools simulate real-world attacks by injecting malicious code or manipulating data, focusing on uncovering vulnerabilities that attackers could exploit. DAST evaluates the effectiveness of security controls within the application. ... Penetration testing is a security assessment process conducted by skilled professionals, often called ethical hackers. These experts simulate real-world attacks to identify and exploit application, network, or system vulnerabilities. While comprehensive and carried out by experienced professionals, manual testing can be time-consuming and expensive.
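
The core DAST move — inject a marker payload, then check whether the running application reflects it unescaped — can be sketched in a few lines. Real DAST tools probe over HTTP against a live deployment; here the "application" is a stand-in function so the sketch stays self-contained, and both handler names are illustrative.

```python
import html

# Marker payload: if it comes back verbatim, input is reflected unescaped
# (a classic reflected-XSS signal).
PAYLOAD = '<script>alert("dast-probe")</script>'

def vulnerable_app(query):
    return f"<p>Results for {query}</p>"                # echoes input verbatim

def patched_app(query):
    return f"<p>Results for {html.escape(query)}</p>"   # escapes input

def reflects_payload(app):
    return PAYLOAD in app(PAYLOAD)

print(reflects_payload(vulnerable_app))  # → True  (flag for review)
print(reflects_payload(patched_app))     # → False
```

A positive result from this kind of automated probe is exactly the sort of finding a human penetration tester would then try to weaponize and assess in context.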


There is no OT apocalypse, but OT security deserves more attention

The whole narrative surrounding attacks on OT environments is therefore quite exaggerated as far as Van der Walt is concerned. “We are not in the OT apocalypse,” in his words. This is important to know, he believes. “In fact, there is a narrative in the market that is out to get organizations to take action and invest.” In other words, we hear more and more that OT environments are under constant attack. At the end of the day, these are actually attacks on organizations’ IT environments. ... “There does exist a very frightening risk that attackers can take over the OT environment,” as Derbyshire puts it. To demonstrate that, he has set up an attack and published about it in scientific circles. This should result in a better understanding of a real OT ransomware attack. ... Finally, it is worth noting that OT security does need more attention. Above all, the researchers want to contribute to the discussion about what an OT attack really is. As Van der Walt summarizes, “IT security has been around for about 25 years, OT security is still very young. We should have learned from our mistakes, so it shouldn’t take another 25 years to get OT security to where IT security is today.”


Manage AI threats with the right technology architecture

Amidst dynamic market conditions, choosing a future-proof technology architecture for threat management becomes essential. This underscores the necessity of selecting the best technologies and the right strategic approach. ... The best-of-breed approach allows companies to respond flexibly to new threats and changes in business requirements. When a new technology comes to market, companies can easily integrate it without overhauling their entire security architecture. This promotes agile adaptation and quick implementation of new solutions to stay current with the latest technology. ... Managing an integrated platform is less complex than managing multiple independent systems. This reduces the training requirements for security staff and minimizes the risk of errors arising from the complexity of integrating different systems. ... Ultimately, the choice should efficiently meet the company’s security goals. It is crucial to invest in advanced technologies and ensure that expenditures are proportionate to the risk. This means that investments should be carefully weighed without incurring unnecessary costs.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - June 28, 2024

AI success: Real or hallucination?

The biggest problem may not be passing compliance muster, but financial muster. If AI is consuming hundreds of thousands of GPUs per year, requiring that those running AI data centers canvass frantically in search of the power needed to drive these GPUs and to cool them, somebody is paying to build AI, and paying a lot. Users report that the great majority of the AI tools they use are free. Let me try to grasp this: AI providers are spending big to…give stuff away? That’s an interesting business model, one I personally wish was more broadly accepted. But let’s be realistic. Vendors may be willing to pay today for AI candy, but at some point AI has to earn its place in the wallets of both supplier and user CFOs, not just in their hearts. We have AI projects that have done that, but most CIOs and CFOs aren’t hearing about them, and that’s making it harder to develop the applications that would truly make the AI business case. So the reality of AI is buried in hype? It sure sounds like AI is more hallucination than reality, but there’s a qualifier. Millions of workers are using AI, and while what they’re currently doing with it isn’t making a real business case, that’s a lot of activity.


Space: The Final Frontier for Cyberattacks

"Since failing to imagine a full range of threats can be disastrous for any security planning, we need more than the usual scenarios that are typically considered in space-cybersecurity discussions," Lin says. "Our ICARUS matrix fills that 'imagineering' gap." Lin and the other authors of the report — Keith Abney, Bruce DeBruhl, Kira Abercromby, Henry Danielson, and Ryan Jenkins — identified several factors as increasing the potential for outer space-related cyberattacks over the next several years and decades. Among them is the rapid congestion of outer space in recent years as the result of nations and private companies racing to deploy space technologies; the remoteness of space; and technological complexity. ... The remoteness — and vastness of space — also makes it more challenging for stakeholders — both government and private — to address vulnerabilities in space technologies. There are numerous objects that were deployed into space long before cybersecurity became a mainstream concern that could become targets for attacks.


The perils of overengineering generative AI systems

Overengineering any system, whether AI or cloud, happens through easy access to resources and no limitations on using those resources. It is easy to find and allocate cloud services, so it’s tempting for an AI designer or engineer to add things that may be viewed as “nice to have” more so than “need to have.” Making a bunch of these decisions leads to many more databases, middleware layers, security systems, and governance systems than needed. ... “We need to account for future growth,” but this can often be handled by adjusting the architecture as it evolves. It should never mean tossing money at the problems from the start. This tendency to include too many services also amplifies technical debt. Maintaining and upgrading complex systems becomes increasingly difficult and costly. If data is fragmented and siloed across various cloud services, it can further exacerbate these issues, making data integration and optimization a daunting task. Enterprises often find themselves trapped in a cycle where their generative AI solutions are not just overengineered but also under-optimized, leading to diminished returns on investment.
Data fabric is a design concept for integrating and managing data. Through flexible, reusable, augmented, and sometimes automated data integration, or copying of data into a desired target database, it facilitates data access for business users and data analysts. ... Physically moving data can be tedious, involving planning, modeling, and developing ETL/ELT pipelines, along with associated costs. However, a data fabric abstracts these steps, providing capabilities to copy data to a target database. Analysts can then replicate the data with minimal planning, reduced data silos, and enhanced data accessibility and discovery. Data fabric is an abstracted, semantic-based data capability that provides the flexibility to add new data sources, applications, and data services without disrupting existing infrastructure. ... As the data volume increases, the fabric adapts without compromising efficiency. Data fabric empowers organizations to leverage multiple cloud providers. It facilitates flexibility, avoids vendor lock-in, and accommodates future expansion across different cloud environments.


DFIR and its role in modern cybersecurity

In incident response, digital forensics provides detailed insights to highlight the cause and sequence of events in breaches. This data is vital for successful containment, eradication of the threat, and recovery. Producing post-incident forensic reports can similarly enhance security by pinpointing system vulnerabilities and suggesting actions to prevent future breaches. Incorporating digital forensics into incident response essentially allows you to examine incidents thoroughly, leading to faster recovery, enhanced security measures, and increased resilience to cyber threats. This partnership improves your ability to identify, evaluate, and address cyber threats comprehensively. ... Emerging trends and technologies are shaping the future of DFIR in cybersecurity. Artificial intelligence and machine learning are increasing the speed and effectiveness of threat detection and response. Cloud computing is revolutionising processes with its scalable options for storing and analysing data. Additionally, improved coordination with other cybersecurity sectors, such as threat intelligence and network security, leads to a more cohesive defence plan.
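Establishing the sequence of events is the core forensic step mentioned above. A minimal sketch of that idea: parse timestamped log entries and sort them into a timeline an analyst can read. The log format and event strings here are assumptions for illustration, not output from any real tool.

```python
# Illustrative timeline reconstruction from raw log lines (hypothetical format:
# ISO-8601 timestamp followed by an event description).
from datetime import datetime

raw_logs = [
    "2024-07-01T10:42:05 auth failed login admin",
    "2024-07-01T10:40:12 auth failed login admin",
    "2024-07-01T10:43:30 auth success login admin",
    "2024-07-01T10:45:02 file read /etc/passwd",
]

def parse(line):
    # Split the timestamp from the rest of the entry.
    ts, event = line.split(" ", 1)
    return datetime.fromisoformat(ts), event

# Sorting by timestamp recovers the sequence of events in the incident.
timeline = sorted(parse(line) for line in raw_logs)
for ts, event in timeline:
    print(ts.isoformat(), event)
```

Even this toy ordering makes the attack narrative visible: repeated failed logins, a successful login, then sensitive file access.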


Ensuring Application Security from Design to Operation with DevSecOps

DevSecOps is as much about cultural transformation as it is about tools and processes. Before diving into technical integrations, ensure your team’s mindset aligns with DevSecOps principles. Underestimating the cultural aspects, such as resistance to change, fear of increased workload or misunderstanding the value of security, can impede adoption. You can address these challenges by highlighting the benefits of DevSecOps, celebrating successes and promoting a culture of learning and continuous improvement. Developers should be familiar with the nuances of the security tools in use and how to interpret their outputs. ... DevSecOps is a journey, not a destination. Regularly review the effectiveness of your tool integrations and workflows. Gather feedback from all stakeholders and define metrics to measure the effectiveness of your DevSecOps practices, such as the number of vulnerabilities identified and remediated, the time taken to fix critical issues and the frequency of zero-day attacks and other security incidents. 
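One of the metrics suggested above, the time taken to fix issues, is easy to compute from vulnerability records. This is a hedged sketch: the record fields (`found`, `fixed`) and date format are assumptions, not any particular tracker's schema.

```python
# Sketch of one DevSecOps metric: mean time (in days) to remediate
# vulnerabilities, skipping issues that are still open.
from datetime import datetime

vulns = [
    {"id": "V-1", "found": "2024-06-01", "fixed": "2024-06-04"},
    {"id": "V-2", "found": "2024-06-02", "fixed": "2024-06-10"},
    {"id": "V-3", "found": "2024-06-05", "fixed": None},  # still open
]

def mean_days_to_fix(records):
    """Average remediation time over closed vulnerabilities only."""
    deltas = [
        (datetime.fromisoformat(r["fixed"]) - datetime.fromisoformat(r["found"])).days
        for r in records
        if r["fixed"] is not None
    ]
    return sum(deltas) / len(deltas) if deltas else None

print(mean_days_to_fix(vulns))  # (3 + 8) / 2 = 5.5
```

Tracking this number per release, alongside counts of vulnerabilities found and remediated, gives the feedback loop the excerpt recommends.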


Essential skills for leaders in industry 4.0

Agility enables swift adaptation to new technologies and market shifts, keeping your organisation competitive and innovative. Digital leaders must capitalise on emerging opportunities and navigate disruptions such as technological advancements, shifting consumer preferences, and increased global competition. ... Effective communication is vital for digital leadership, especially when implementing organisational change. Inspiring positive, incremental change requires empowering your team to work towards common business goals and objectives. Key communication skills include clarity, precision, active listening, and transparency. ... Empathy is essential for guiding your team through digital transformation. True adoption demands conviction from top leaders and a determined spirit throughout the organisation. Success lies in integrating these concepts into the company’s operations and culture. Acknowledge that change can be overwhelming, and by addressing employees' stressors proactively, you can secure their support for strategic initiatives. ... Courage is indispensable for digital leaders, who must embrace risk to succeed.


Platform as a Runtime - The Next Step in Platform Engineering

It is almost impossible to ensure that all developers comply 100% with all the system's non-functional requirements. Even a simple thing like input validation may vary between developers. For instance, some will not allow nulls in a string field, while others will, causing inconsistency in what is implemented across the entire system. Usually, the first step to aligning all developers on best practices and non-functional requirements is documentation, build and lint rules, and education. However, in a complex world, we can’t build perfect systems. When developers need to implement new functionality, they face trade-offs they need to make. The push for standardization arises from the need to mitigate these scaling challenges. Microservices are another attempt to handle scaling issues, but as the number of microservices grows, you will start to face the complexity of a large-scale microservices environment. In distributed systems, requests may fail due to network issues. Performance is degraded since requests flow across multiple services via network communication, as opposed to in-process method calls in a monolith.
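The null-handling inconsistency described above is exactly the kind of thing a platform team can standardize: a single shared validator that every service imports, instead of each developer deciding ad hoc. This is an illustrative sketch; the function name and the platform-wide rule (reject nulls unless a field explicitly opts in) are assumptions for the example.

```python
# Hypothetical platform-provided validator enforcing one consistent rule
# for string fields across all services.

def require_string(value, field, allow_null=False):
    """Platform-wide rule: string fields reject None unless they opt in."""
    if value is None:
        if allow_null:
            return None
        raise ValueError(f"{field} must not be null")
    if not isinstance(value, str):
        raise TypeError(f"{field} must be a string")
    return value.strip()

print(require_string("  Ada  ", "name"))                  # normalized string
print(require_string(None, "nickname", allow_null=True))  # explicit opt-in
```

Shipping validation as a platform capability, rather than as documentation developers may or may not follow, is the core idea behind moving such concerns into the runtime.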


The distant world of satellite-connected IoT

The vision is that IoT devices, and mobile phones, will be designed so that as they cross out of terrestrial connectivity, they can automatically switch to satellite. Devices will no longer be either/or; they will be both, offering a much more reliable network: when a device loses contact with the terrestrial network, a permanently available alternative can be used. “Satellite is wonderful from a coverage perspective,” says Nuttall. “Anytime you see the sky, you have satellite connectivity. The challenge lies in it being a separate device, and that ecosystem has not really proliferated or grown at scale.” Getting to that point, MacLeod predicts that we will first see people using 3GPP-type standards over satellite links, but they won’t immediately be interoperating. “Things can change, but in order to make the space segment super efficient, it currently uses a data protocol that's referred to as NIDD - non-IP-based data delivery - which is optimized for trickier links,” explains MacLeod. “But NB-IoT doesn’t use it, so the current style of addressing data communication in space isn’t mirrored by that on the ground network. Of course, that will change, but none of us knows exactly how long it will take.”


Navigating the cloud: How SMBs can mitigate risks and maximise benefits

SMBs often make several common mistakes when it comes to cloud security. By recognising and addressing these blind spots, organisations can significantly enhance their cybersecurity. One major mistake is placing too much trust in the cloud provider. Many IT leaders assume that investing in cloud services means fully outsourcing security to a third party. However, security responsibilities are shared between the cloud service provider (CSP) and the customer. The specific responsibilities depend on the type of cloud service and the provider. Another common error is failing to back up data. Organisations should not assume that their cloud provider will automatically handle backups. It's essential to prepare for worst-case scenarios, such as system failures or cyberattacks, as lost data can lead to significant downtime and losses of productivity and reputation. Neglecting regular patching also exposes cloud systems to vulnerabilities. Unpatched systems can be exploited, leading to malware infections, data breaches, and other security issues. Regular patch management is crucial for maintaining cloud security, just as it is for on-premises systems.
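A small concrete step toward the backup advice above is to verify backup recency rather than assume the provider handles it. This sketch checks whether the most recent backup falls within an allowed window; the timestamps and the 24-hour policy are illustrative assumptions, not a recommendation for any particular schedule.

```python
# Illustrative backup-recency check: fail loudly if the latest backup is
# older than the organisation's chosen window.
from datetime import datetime, timedelta

def backups_current(last_backup, now, max_age_hours=24):
    """Return True if the most recent backup is within the allowed window."""
    return now - last_backup <= timedelta(hours=max_age_hours)

now = datetime(2024, 7, 3, 12, 0)
print(backups_current(datetime(2024, 7, 3, 2, 0), now))   # 10 hours old
print(backups_current(datetime(2024, 7, 1, 12, 0), now))  # 48 hours old
```

In practice this check would read the backup timestamp from the CSP's API or a backup manifest and alert when it returns False, making a silent backup failure visible before data is actually lost.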



Quote for the day:

"What seems to us as bitter trials are often blessings in disguise." -- Oscar Wilde