Daily Tech Digest - November 08, 2023

Iconic Singapore hotel caught up in major data breach

The breach was first identified on 20 October, having begun a day previously when an undisclosed third party gained unauthorised access to the firm’s systems. “Upon discovery of the incident, our teams immediately took action to resolve it. Investigations have since determined that an unknown third party accessed customer data of about 665,000 non-casino rewards programme members,” MBS said in a statement. “Based on our investigation, we do not have evidence to date that the unauthorised third party has misused the data to cause harm to customers. “We do not believe that membership data from our casino rewards programme, Sands Rewards Club, was affected. “After learning of the issue, we quickly launched an investigation, have been working with a leading external cyber security firm, and have taken action to further strengthen our systems and protect data,” said the organisation. The compromised data is understood to include names, email addresses, mobile phone and landline numbers, countries of residence, and membership numbers and tier status. MBS is reaching out to those affected.


8 ways to fix open source funding

Richard Stallman famously said, “Free as in speech, not as in beer.” Now, some developers are creating licenses that don’t offer either sense of freedom—but they’re still delivering just enough of the kind of openness that satisfies their users’ curiosity. One version is the “free tier” that offers enough access to test new ideas and maybe run a small, personal website while still charging for more substantial use. Developers encounter no impediment when they’re just experimenting, but if they want to start something serious, they need to pay. Another example is the license that lets users read but not distribute. One developer told me that he routinely lets paying customers get full access to the code for audits or experimentation, but he does not release it into the open. The customers get to see what they want, but they can’t undercut the company or give away the software for free. These licenses deliver some of what made open source popular without sacrificing the ability to compel payment.


Meet Your New Cybersecurity Auditor: Your Insurer

The current state of cyber insurance offers some actionable opportunities for security decision-makers. First, don't underestimate the power of an accurate cyber-insurance self-assessment, which is how cyber insurers judge organizations during the auditing and claims processes. Current self-assessment surveys ask surprisingly challenging questions and cover a wide set of fields from backups to AD security to MFA. It is important not to treat this as a formality and to ensure that information is entirely accurate; insurers are more than willing to decline coverage and even sue if an enterprise falsely claims, for example, that it has MFA protection across all its digital assets. ... Therefore, the second step is for CISOs to prove they actually have the capabilities they attested to on those forms. Luckily, this is a landscape familiar to seasoned CISOs. Creating and maintaining detailed records, building reporting systems, documenting all relevant business and security processes, and creating tamper-proof data for cyber forensics are all possible with sophisticated cybersecurity tools.


Green data centres: Efforts to push sustainable IT developments

Green data centres are spearheading a transformative wave in the IT industry, bringing substantial benefits to both businesses and the environment. From a financial perspective, these eco-conscious facilities deliver remarkable cost savings. By leveraging energy-efficient technologies and renewable energy sources, companies can significantly reduce operational expenses. Innovative solutions like data reduction technology and automated resource optimisation further bolster these financial advantages. However, the influence of green data centres extends far beyond financial gains. They play a pivotal role in mitigating greenhouse gas emissions, actively contributing to the fight against climate change. As data centres and communication sectors are anticipated to account for up to 3.9% of global emissions, the adoption of renewable energy sources and energy-efficient practices dramatically reduces their carbon footprint. In doing so, green data centres are setting a commendable example for other industries, driving the broader adoption of sustainable practices. 


The 3 key stages of ransomware attacks and useful indicators of compromise

Once hackers find key data, they will begin to download the actual ransomware payload. They may exfiltrate data, set up an encryption key, and then encrypt the vital data. IoCs at this stage include communication with a C2 server, data movement (if the attacker is exfiltrating important data before they encrypt it) and unusual activity around encrypted traffic. Detecting at this stage involves more advanced security products working in unison. Chaining different types of analytics models together is an efficient way to catch minor indicators of compromise when it comes to ransomware, because together they gather context on the network in real time, allowing SOC teams to identify anomalous behavior when it occurs. If a security alert is triggered, these other analytics can provide more context to help piece together if and how a larger attack is occurring. But many successful ransomware attacks will not trip antivirus at all, so assembling an accurate picture of user behaviors and compiling the numerous indicators into a coherent timeline is vital.
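The chaining idea above can be sketched in a few lines. This is an illustrative correlator only (the indicator names, window, and threshold are hypothetical, not from any specific product): individually weak indicators are grouped per host within a time window, and only the combination is escalated.

```python
from collections import defaultdict

# Hypothetical weak indicators that, combined, resemble a ransomware attack.
RANSOMWARE_PATTERN = {"c2_beacon", "bulk_data_egress", "encrypted_traffic_spike"}

def correlate(alerts, window_seconds=3600):
    """Group weak indicators per host inside a time window and escalate
    when two or more of the pattern co-occur on the same host."""
    by_host = defaultdict(list)
    for ts, host, indicator in sorted(alerts):
        by_host[host].append((ts, indicator))

    escalations = []
    for host, events in by_host.items():
        for i, (start_ts, _) in enumerate(events):
            # Indicators seen on this host within the window of this event.
            seen = {ind for ts, ind in events[i:] if ts - start_ts <= window_seconds}
            matched = seen & RANSOMWARE_PATTERN
            if len(matched) >= 2:
                escalations.append((host, sorted(matched)))
                break
    return escalations
```

In practice each analytic would also attach context (user, process, destination) so the SOC can reconstruct the timeline the excerpt describes.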


What's possible in a zero-ETL future?

ETL frequently requires data engineers to write custom code. Then, DevOps engineers or IT administrators have to deploy and manage the infrastructure to make sure the data pipelines scale. And when the data sources change, the data engineers have to manually change their code and deploy it again. Furthermore, when data engineers run into issues, such as data replication lag, breaking schema updates, and data inconsistency between the sources and destinations, they have to spend time and resources debugging and repairing the data pipelines. ... Zero-ETL enables querying data in place through federated queries and automates moving data from source to target with zero effort. This means you can do things like run analytics on transactional data in near real-time, connect to data in software applications, and generate ML predictions from within data stores to gain business insights faster, rather than having to move the data to an ML tool. You can also query multiple data sources across databases, data warehouses, and data lakes without having to move the data. To accomplish these tasks, we've built a variety of zero-ETL integrations between our services to address many different use cases.
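As a toy illustration of the querying-in-place idea, here is a sketch using SQLite's ATTACH as a stand-in for a federated query engine (real zero-ETL cloud integrations work differently under the hood; the table and file names are hypothetical). Two separate "source" databases are joined directly, with no copy or pipeline step:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
ops_path = os.path.join(tmp, "ops.db")
crm_path = os.path.join(tmp, "crm.db")

# Populate two independent source databases.
with sqlite3.connect(ops_path) as ops:
    ops.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
    ops.executemany("INSERT INTO orders VALUES (?, ?)",
                    [(1, 10.0), (1, 25.5), (2, 7.0)])

with sqlite3.connect(crm_path) as crm:
    crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    crm.executemany("INSERT INTO customers VALUES (?, ?)",
                    [(1, "Acme"), (2, "Globex")])

# Federated-style query: join both sources in place, no ETL step.
conn = sqlite3.connect(ops_path)
conn.execute(f"ATTACH DATABASE '{crm_path}' AS crm")
rows = conn.execute("""
    SELECT c.name, SUM(o.amount)
    FROM orders o JOIN crm.customers c ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
conn.close()
print(rows)  # [('Acme', 35.5), ('Globex', 7.0)]
```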


An Ethical Approach to Employee Poaching

The practice of employee poaching isn’t without risk, because it hurts companies to lose good employees and relationships can get fractured. This is one reason why many public sector organizations insist on notifying the organization that could potentially lose an employee in advance of even scheduling an interview with a job candidate from that organization. On the private sector side, there are no such rules, but there is an etiquette for employee poaching that seems to work. ... When it comes to poaching, your employees need to know about tampering, too. It’s often employees who start the poaching process. They develop relationships with employees in a partner organization, and it is natural to want to work together. Nevertheless, there is a fine line between just wanting to work together and a situation that escalates into aggressive recruitment (and unacceptable tampering). The best practice is to remind employees about tampering, and to explain what it is, so that employees don’t actively recruit individuals from partner organizations without going through proper channels.


Data Management for M&A: 14 Best Practices Before and After the Deal

After the deal is complete, you can begin executing your integration strategy. You have already performed due diligence on the data landscape of both parties to the merger, created the integration plan, and estimated the workload. The steps and practices below do not have to be executed in exact order or in full. They represent the best practices for ensuring data quality, accessibility, privacy, usability, and transparency. You should start with the activity that best corresponds to your data pains and business objectives. ... After you have set the foundations for the effective use of data, you need to focus on getting data into shape (and keeping it that way) for critical business processes, reports, models, and data products. After all, to get real benefits from the acquired data, it is necessary to integrate it. However, as we know, 88% of data integration projects fail or overrun their budgets because of poor data quality.


Keep it secret, keep it safe: the essential role of cybersecurity in document management

Solberg says security considerations should be an integral component of any strategic assessment for document management. “For example, when identifying the key objectives organizations may typically identify increased efficiency, reduced costs, increased collaboration,” he says. “Given the significant cyber risks organizations face in our rapidly digitized world, it's essential that the organization also clearly articulate an objective to protect the data, documents, and systems from the outset.” Security must also be incorporated in the phases of the document management assessment, including the analysis of the current state and the articulation of the roadmap, according to Solberg. “The integration of cybersecurity in these phases not only helps to identify the baseline compliance requirements that will inform the strategy but the capabilities that the organization will need to meet those requirements,” he adds. Security is a key enabler of success within any organization and has become a top strategic priority for all successful Internet-connected companies, says Jeffrey Bernstein.


Many CIOs are better equipped to combat rising IT costs. Are you?

IT organizations can save substantial amounts on SaaS contracts by lowering service levels, CIOs say. “Too often we pay for the tier above what we need,” says McKee. But while Wiedenbeck did change service levels in one situation, he urges caution. “It’s dangerous to get so focused on cost that you start looking for ways to reduce it without better understanding the risks,” he says. “On the flip side, we shouldn’t be so fearful of any risk that we overpay for services and service levels. Inflation shouldn’t make us abandon balanced management of cost, risk, and value, [but] I do see it as a great opportunity to revisit those areas and see if we’re willing to adjust that balance.” Partnering with software vendors is another key to keeping costs under control. It should be a mutually beneficial relationship, CIOs say, so be prepared for some give and take. “There’s typically more flexibility on pricing if there’s added value that can be found, for example, by introducing other clients or integrating products together, creating a win-win situation,” says McKee.



Quote for the day:

“Good manners sometimes means simply putting up with other people's bad manners.” -- H. Jackson Brown, Jr.

Daily Tech Digest - November 07, 2023

Where businesses use honeypots as part of their defences, they typically rely on traditional honeypots, i.e. a non-existent computer on the network or perhaps an entire network range, and then alert on any attempt to connect to the computer or range. This can be effective, for instance by identifying an attacker who has gained access to the internal network and is port-scanning the entire range. However, many advanced attackers do not resort to ‘noisy’ techniques such as port scanning once on the internal network; instead, they often rely on subtle lateral movement such as obtaining network maps and connecting directly to servers of interest. To catch such advanced attackers requires more sophisticated honeypots. Attackers will often attempt to obtain administrative credentials to aid their movement around networks. They can do this by a number of means, from password-guessing attacks against administrative accounts, to more advanced attacks that allow them to carry out actions with the permissions of anyone using the computer they are accessing.


What Happens When You Lose Your Cyber Insurance?

If there has been outright fraud or misrepresentation on the application, the loss of coverage could be sudden. In most cases, companies will not find themselves unexpectedly without insurance. “You're going to have notice, whether that's 60 to 90 days out,” says Cigarroa. Even with notice, organizations will be working against the clock. Can they get new coverage in time to avoid a gap? If an enterprise does experience a gap in coverage, any costs associated with a data breach or cyberattack that occurs during that period will not be offset by insurance. The prospect of getting new coverage also means that companies will have a new retroactive date for coverage. If an incident that dates back months or even years is uncovered, the new policy is very unlikely to cover it. “You are not going to be able to go back and cover things that happened under the prior insurance or especially during that window of time between when the last policy was cancelled or lapsed to when the new policy is placed,” says Moss.


Platform Engineering — Navigating Today, Forecasting Tomorrow

Platform engineering emerges as a radical shift in the world of software development and DevOps. It’s where traditional coding meets modern operational practices, aiming to keep pace with our ever-evolving tech scene. Here’s a quick dive into what’s shaking things up in platform engineering: Early bird gets the worm — The shift-left approach: The idea is simple. Tackle tasks upfront in the development process. By catching potential issues early, we boost efficiency and save headaches down the line. Walking the golden path: Imagine giving developers a roadmap through the software delivery life cycle, one that encourages freedom but within clear guidelines. ... As we dive deeper into the world of platform engineering, it’s clear that AI is shaking things up in a big way, making our work smoother and more user-friendly. Just imagine, operations teams, the unsung heroes behind the scenes, are starting to think more like product managers. Their focus is shifting from just keeping the lights on to genuinely enhancing the tools and systems developers use. And as with any major change, it’s crucial to manage the transition well. 


The impact of public cloud price hikes

Price hikes are inevitable. Those that can produce, operate, and maintain cloud services also have their own increasing bills for talent, power, and regulatory issues that make running a public cloud service challenging. Switching cloud service providers is not easy for enterprises that have spent significant money and time leveraging services native to specific public clouds. Moving to another cloud provider is out of the budget for most enterprises. It just costs too much to switch. Lock-in was always a known risk, and the more we optimize our applications and data sets for a specific cloud provider, the more we’re stuck with that provider. It’s a catch-22. If we choose not to use native cloud features, we give up the ability to get the most from the public cloud platforms. As a result, customers are effectively locked into their cloud service provider. The lack of mobility leaves customers at the mercy of price increases. ... Most enterprises need better options. Cloud services are vital for their operations; they feel they must find the money when the price rises. We may even see this with businesses that consume cloud services indirectly, such as retail, streaming services, and personal cloud storage systems. Those higher cloud costs are passed along to indirect cloud consumers as well.


7 Ways to Become a Software Engineer Without a Degree

You don't need a CS degree to work as a software engineer, but having some type of document that attests to your ability to program is important for landing job interviews. That's why it's worth investing time in earning at least one or two technical certifications related to programming. Some online programming courses, such as Coursera's "Introduction to Software Engineering," offer certificates of completion. You can also pay for training and certification in software engineering from organizations like the IEEE Computer Society. ... Enrolling in a coding bootcamp — which is an accelerated program that promises to teach students how to code in as little as a couple of months — has become a popular path for folks who want to work in software engineering without obtaining college degrees. There are many potential downsides of coding bootcamps. They often require you to attend class full-time, which means you can't work a job while attending the bootcamp, and there is no guarantee that you'll finish successfully. Nor is there a guarantee that employers will view completion of a coding bootcamp as evidence that you're truly prepared for a software engineering job.


3 Leadership Secrets That Lead to Team Empowerment

The biggest obstacle to team empowerment is the failure of leaders to enable people to make their own decisions — most often, because they did not provide the strategic blueprint to guide those decisions. Even I can occasionally be guilty of this. When our company hits a point of change, I can be inclined to require the escalation of a decision up to me. Sometimes, that is necessary, but even after making that decision, I still need to remember to let go again and re-establish the blueprint so everyone else will be empowered to make that decision moving forward. ... Part of empowering a team is recognizing those capable of working with greater responsibility and those who might not be comfortable with the level of risk involved. It can also mean compensating high-performers at the level of their contribution if they do not want to rise to another level. In my first job with a software company, I worked with a colleague who was a phenomenal technical writer and was comfortable remaining in that role. The head of the department recognized his skill, too, and changed the pay structure so individuals could be paid at a level similar to a manager or above because of the superior level of their work.


2024 network plans dogged by uncertainty, diverging strategies

Every network operator and vendor will stand up and pledge standards support, but every vendor will at the same time design their products and strategies to pull through their whole portfolio. Add different strokes for all those vendor folks to different mission drivers, and you understand why 79 of those 83 CIOs say they don’t really have a “single network architecture model” in place, and 48 say they’re actually moving away from a standard approach. Virtualization can make the unreal look real, so why not allow multiple personalized “unreals”? Look into the virtual-network mirror and you see...yourself. Nowhere is this more visible than in the management space. A couple of decades ago, companies had a “network operations center,” and a “single pane of glass” to show network status was the goal. Only 14 of 83 enterprises said they really had a NOC today, and when asked about a single pane of glass, one CIO quipped “I have five single panes of glass!” CIOs say that the current craze in “observability” is a response to the fact that it’s become very difficult to determine what the cause of an outage is.


Overheating datacenter stopped 2.5 million bank transactions

DBS and Citibank, the banks involved, experienced outages in the mid-afternoon of October 14, 2023 that resulted in full or partial unavailability of online banking apps for around two days – leaving customers and vendors without a way to make payments in a city-state that is increasingly reliant on digital financial systems. ... The root cause of the outages was issues in the cooling system that caused the temperature to rise above optimal operating range at the Equinix datacenter used by both institutions. Equinix has reportedly blamed a contractor, alleging that person "incorrectly sent a signal to close the valves from the chilled water buffer tanks" during a planned system upgrade. Upon the outage, both banks immediately activated IT disaster recovery and business continuity plans. "However," according to Tan, "both banks encountered technical issues which prevented them from fully recovering their affected systems at their respective backup datacenters – DBS due to a network misconfiguration and Citibank due to connectivity issues." Tan concluded that the two banks had "fallen short" of MAS requirements to ensure critical IT systems are resilient. 


Breaking down data silos for digital success

To break down data and political silos to drive democratization and standardization of data, technology research and advisory firm ISG started by rolling out an initiative to build and deliver a common data platform, says Kathy Rudy, chief data and analytics officer at ISG. Those spearheading the effort briefed leaders of the business units about it and asked for their support, as they approached data owners across their businesses. In preparation for the rollout, “we inventoried our data across the organization and categorized by type, owner, platform, data usage, data formats, terminology, etc.,” Rudy says. “With that knowledge, we built a data dictionary and common taxonomy.” Having this information ahead of time was critical to build trust and cooperation from data owners, whose participation was key to the program success, Rudy says. “Our understanding of their data and its structure allowed us to have pragmatic conversations about the effort required to create the common taxonomy and data structure necessary to allow for better access, usage, and monetization of data across the company,” Rudy says.


A Practical Guide to Mitigating DevOps Backlog

Business requirements are dynamic and evolve over time, adding complexity to the DevOps backlog. ... Managing these new environments and ensuring compliance with relevant regulations adds to the workload of the DevOps team and adds to the DevOps backlog stockpile. ... The symbiotic relationship between business growth and technological advancements is undeniable. As time progresses, the architecture inevitably becomes more intricate, resulting in a growing number of tasks and challenges within the DevOps backlog. The complexities surrounding DevOps resonate with numerous companies, yet the proactive approach to addressing them remains an issue. ... Observability is the key to implementing effective monitoring practices for applications. This involves setting up alerts, capturing relevant metrics and configuring dashboards to track application performance and resource utilization. Prioritizing observability empowers teams to proactively identify and resolve any issues within the DevOps pipeline, ensuring a stable and reliable environment for continuous improvement.
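The observability practice described above, capturing metrics and alerting on anomalies, can be sketched as follows. This is an illustrative example only (the class name, window size, and threshold are hypothetical): keep a rolling baseline of a metric such as request latency, and fire an alert when a new value is a statistical outlier.

```python
import statistics
from collections import deque

class MetricMonitor:
    """Track a rolling window of a metric and alert when the latest
    value deviates from the baseline by more than N standard deviations."""

    def __init__(self, window=100, threshold_sigma=3.0, alert=print):
        self.samples = deque(maxlen=window)
        self.threshold_sigma = threshold_sigma
        self.alert = alert

    def record(self, value):
        if len(self.samples) >= 10:  # need a baseline first
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples)
            if stdev > 0 and abs(value - mean) > self.threshold_sigma * stdev:
                self.alert(f"anomaly: {value:.1f} vs baseline {mean:.1f}±{stdev:.1f}")
        self.samples.append(value)
```

In a real pipeline the alert callback would page the on-call engineer or open an incident, and the same metrics would feed the dashboards mentioned in the excerpt.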



Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti

Daily Tech Digest - November 06, 2023

Business Continuity vs Disaster Recovery: A Guide to Key Differences

Business continuity is like an umbrella, covering every aspect of your business that could be impacted by disruptions – not just technology. Think of it as the master plan that keeps your entire operation functioning when faced with challenges. In contrast, IT disaster recovery is more specific; its focus lies in restoring systems, applications and data after an interruption occurs in tech infrastructure due to any number of causes – natural disasters, cyber-attacks or human error. The first major difference between these two concepts comes down to their scope. While business continuity covers all areas affected by potential disruptions, IT disaster recovery focuses on ensuring technological infrastructures remain functional following crises. Secondly, they have different end goals: while business continuity aims at maintaining essential functions across the organization during a crisis situation till normalcy returns; IT disaster recovery’s objective is getting systems back up and running post-interruption. A third distinction lies within timeframes: A Business Continuity Plan often has longer-term solutions compared to quicker response times expected from an effective Disaster Recovery Plan.


Unlocking the power of multi-cloud

In the era of digital transformation and widespread cloud migration, ensuring robust data security has become a paramount concern for enterprises. The introduction of regulations, such as the Digital Personal Data Protection Act 2023, extends the scope of compliance to smaller businesses, emphasizing the need for comprehensive data protection strategies. End-to-End Data Security Platforms: To address the evolving landscape of data security, businesses are advised to adopt full end-to-end data security platforms. These platforms serve a multifaceted role, helping organizations discover, protect, monitor, and respond to threats across on-premises and cloud environments. Structured and Unstructured Data Management: Platforms should enable the discovery and classification of both structured and unstructured data, providing a comprehensive view of data assets. This capability is crucial for effective data management and compliance efforts. Continuous Monitoring for Risk Mitigation: Implementing continuous monitoring practices is essential for reducing the risk of data breaches. This involves vigilant oversight of data access across on-premises and multiple cloud environments.


Shadow IT use at Okta behind series of damaging breaches

Okta CISO David Bradbury said: “The unauthorised access to Okta’s customer support system leveraged a service account stored in the system itself. This service account was granted permissions to view and update customer support cases. “During our investigation into suspicious use of this account, Okta Security identified that an employee had signed into their personal Google profile on the Chrome browser of their Okta-managed laptop. “The username and password of the service account had been saved into the employee’s personal Google account. The most likely avenue for exposure of this credential is the compromise of the employee’s personal Google account or personal device,” he said. Bradbury added: “We offer our apologies to those affected customers, and more broadly to all our customers that trust Okta as their identity provider. We are deeply committed to providing up-to-date information to all our customers.” Okta said its investigation had been complicated by a failure to identify file downloads in customer support vendor logs. 


Getting Aggressive with Cloud Cybersecurity

The best way to get started is by evaluating vendors that offer proactive cloud security tools and determining their capabilities, Dalling advises. He also suggests reviewing the existing cloud-native inventory and security techniques. “Work with your organization’s security operations center to determine the most effective way to integrate a proactive cloud security tool into their monitoring and incident response workflows,” Dalling adds. By adopting a proactive cloud security approach, organizations can safeguard themselves against security threats, ensure compliance, and increase customer trust, says Ravi Raghava, vice president of cloud solutions at technology integrator SAIC via email. “This approach is often more cost effective than dealing with the aftermath of a security breach, which can result in substantial financial and reputational losses.” He notes that business partners are more likely to trust organizations that prioritize the protection of their data through proactive security steps.


Lessons From 100+ Ransomware Recoveries

Your data retention policy defines how long you keep data for regulatory or compliance reasons, and how you remove it when it’s no longer needed. Ransomware attackers have evolved their methods. They know you are less likely to pay out if you can quickly switch over to Disaster Recovery systems. They are now delaying detonation of ransomware to outlast typical retention policies. This is the limitation of DR solutions. While they are the fastest way to recover, they have a limited number of versions or days you can recover to. For one of our manufacturing customers – using both our BaaS and DRaaS products – the ransomware was present on their systems for around three months. That meant that every DR recovery point was compromised, and we had to recover from backups. The Recovery Time Objective (RTO) was a day. We recovered from backups, so it took longer than DR but relatively speaking, it was a fast recovery. The Recovery Point Objective (RPO), however, was from three months prior. The challenge that the organisation then faced was how to re-create that lost data.
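The recovery-point trade-off described above can be sketched as a simple selection rule. This is a hypothetical helper, assuming you can estimate when the ransomware entered the environment: pick the most recent recovery point that predates the compromise by a safety margin, and accept whatever RPO that implies.

```python
from datetime import datetime, timedelta

def latest_clean_recovery_point(points, compromise_time,
                                safety_margin=timedelta(days=1)):
    """Return the most recent recovery point taken before the estimated
    compromise time (minus a safety margin), or None if every point is
    suspect. The gap between this point and 'now' is the effective RPO."""
    cutoff = compromise_time - safety_margin
    clean = [p for p in points if p <= cutoff]
    return max(clean) if clean else None
```

In the manufacturing case above, the compromise predated every DR snapshot, so this rule would have skipped all of them and fallen back to a months-old backup, which is exactly what happened.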


Exploring the global shift towards AI-specific legislation

It is vital that the public – but moreover, all stakeholders – be involved in discussions around AI. The technology companies developing AI, for example, are likely the best placed to understand the technology fully and can help guide any such discussion. Those organizations deploying the technology must also be closely involved, as they have a particular viewpoint to offer. Governments also need to be a part of the discussion. The position of various nations can offer value and help steer the decision-making of all those governments represented in this context. Finally, let’s not forget the general public, the individuals whose data will likely be processed by the technology. All play valuable yet different roles and will come with different viewpoints that should be aired and considered. ... Legislation or any form of regulation is often seen as restrictive: by its very nature, it comprises a set of rules that govern. That is often interpreted as “restrictive” and hinders development, innovation, and technological advancement in this context. That is a generalist, simplistic, and somewhat dismissive view.


The 10 Biggest Cyber Security Trends In 2024 Everyone Must Be Ready For Now

With the work-from-home revolution continuing, the risks posed by workers connecting or sharing data over improperly secured devices will continue to be a threat. Often, these devices are designed for ease of use and convenience rather than secure operations, and home consumer IoT devices may be at risk due to weak security protocols and passwords. The industry has generally dragged its feet over implementing IoT security standards, despite the vulnerabilities having been apparent for many years, which means this will continue to be a cyber security weak spot – though this is changing. ... Two terms that are often used interchangeably are cyber security and cyber resilience. However, the distinction will become increasingly important during 2024 and beyond. While the focus of cyber security is on preventing attacks, the growing value placed on resilience by many organizations reflects the hard truth that even the best security can’t guarantee 100 percent protection. Resilience measures are designed to ensure continuity of operations even in the wake of a successful breach.


Andrew McAfee – ‘Human beings are chronically overconfident’

All of us, as human beings, are chronically overconfident. It’s the most common cognitive bias. That means that your brain children are going to be very, very dear to you, to the point that you’re probably unable to see the holes and the flaws. So that’s a problem. The solution is other people. This is how science works. This is why I describe one of the great geek norms simply as “science”. Science is really subjecting your ideas to the scrutiny of other people, and then having evidence-based discussions about the merits of those ideas. Is this good? Is this correct or not? One thing you can absolutely start doing is being a little less fond of your own ideas and stress testing those ideas early and often with other people. Another thing we can do is acknowledge other people’s good ideas. Just start saying, “That’s a really good idea, thanks. I hadn’t thought of that. Maybe we should take a different approach here.” Those kinds of statements are super powerful, especially when they are coming from a leader in an organisation, because as humans we are wired to take our cues from the people who have high status in an organisation.


IT leader’s survival guide: 8 tips to thrive in the years ahead

With so many disruptive technologies emerging at once, and IT leaders pulled into solving so many more business challenges, it’s easy to get caught up in the fervor. But in addition to embracing change, IT leaders need to develop a multifaceted approach to navigating current technology and business challenges, says Sanjay Srivastava, chief digital strategist at Genpact. “IT leaders need to adapt by adopting a holistic approach that focuses on resilience, agility, diversification, and collaboration,” Srivastava says. “In this evolving IT investment landscape, the definition of risk has not changed, but the timeframe for response has shortened.” ... It can be difficult to adapt quickly as technology advances, while working to comply with varying regulations across state lines and borders. “The challenge is that the technology footprint — and our understanding of potentials and pitfalls — is still maturing, for instance with generative AI. It’s understandable and expected that regulations will evolve, and working through the changes coming in an otherwise long-term tech stack will be key to getting it right,” he says.


Empowered Agile Transformation – Beyond the Framework

The executive team could be working 10 to 20 years out of date, because the expertise and experience that got them to their current position has lost its relevance in a world of accelerated change. Their approach can be to apply past experience to current problems, but their 20-year-old solutions are incompatible with contemporary problems. They need to retrain and adopt flexible systems that adapt to new challenges. Otherwise, workers are constrained by the level of understanding of group executives, progress is inhibited, and executives end up impeding their teams’ potential. We have the technology, tools, and experience to work contemporaneously today and to leverage agility in delivering value. It is now the executive leaders and company boards who resist the new way of working: a more collaborative way to generate value for businesses and their customers. The solution is for them to understand their current customers’ problems and identify threats to their business models, while gaining the skills and competencies to apply contemporary ways of working.



Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart

Daily Tech Digest - November 05, 2023

Less Code Alternatives to Low Code

Embracing a “minimalist coding” philosophy is foundational. It’s anchored in a gravitation toward clarity, prompting you to identify the indispensable elements in your code, and then discard the rest. Is there a more succinct solution? Can a tool achieve this outcome with less code? Am I building something unique and valuable or rehashing solved problems? Every line of code must be viewed for the potential value it delivers and the future burden it represents. Reduce that burden by avoiding or removing code when you can and leveraging the work of others. ... Modern frameworks offer a significant enhancement to development productivity, primarily by reducing the amount of code written to perform common tasks. Additionally, the underlying code of the framework is tested and maintained by the community, alleviating peripheral maintenance burdens. The same goes for code generators; they’re not merely about avoiding repetitive keystrokes, but about ensuring that the generated code itself is consistent and efficient. 
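The trade-off the author describes can be sketched in a few lines of Python (the function names here are illustrative, not from the article): a hand-rolled word counter versus one that leans on the standard library. The second version is less code to write, test, and maintain, since the community maintains `collections.Counter`.

```python
from collections import Counter

# Hand-rolled version: more lines, more surface area to maintain and test.
def word_counts_manual(text):
    counts = {}
    for word in text.lower().split():
        if word in counts:
            counts[word] += 1
        else:
            counts[word] = 1
    return counts

# Minimalist version: leverage the stdlib, which others test and maintain.
def word_counts(text):
    return Counter(text.lower().split())
```

Both produce the same result; the difference is how much code you now own.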


Software Deployment Security: Risks and Best Practices

Blue-Green Deployment is a release management strategy designed to reduce downtime and risk by running two identical production environments, known as Blue and Green. At any time, only one of these environments is live, with the live environment serving all production traffic. The primary security implication of Blue-Green deployment is the risk of data inconsistency during the switchover. If not properly managed, sensitive data could be exposed, lost or corrupted. Furthermore, because two environments are maintained, security measures must be duplicated, potentially leading to inconsistencies and vulnerabilities if not properly managed. ... Canary deployment is a strategy where new software versions are gradually rolled out to a small subset of users before being deployed to the entire infrastructure. This strategy allows teams to test and monitor the performance of the new release in a live environment with less risk. Because canary deployment exposes new software versions to a smaller user base first, it can surface vulnerabilities before a full-scale release. However, if a vulnerability is exploited during this stage, it could lead to a security breach affecting that subset of users.
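One common way to pick the canary subset (a sketch under my own assumptions, not tied to any particular platform) is to hash the user ID into a bucket. Unlike a random coin flip per request, hashing keeps each user's assignment stable, so nobody flip-flops between the old and new version mid-session.

```python
import hashlib

def is_canary(user_id, canary_percent):
    """Deterministically bucket a user into the canary group.

    Hashing the user ID keeps the assignment stable across requests:
    the same user always lands in the same bucket.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < canary_percent

def route(user_id, canary_percent=10):
    """Route a user to the canary or the stable environment."""
    return "canary" if is_canary(user_id, canary_percent) else "stable"
```

With `canary_percent=10`, roughly one user in ten sees the new version, and the same user is always routed the same way until the percentage is raised.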


It’s time to take your genAI skills to the next level

The workforce of the future will learn AI in school, and during the next 15 years each successive generation of graduates will likely have much stronger AI kung fu than the last. In fact, my own son owns a Silicon Valley-based startup called Chatterbox, which exists to teach AI literacy to kids as young as eight years old. Learning AI at that age is unimaginable to adults currently in the workforce. Young workers entering the workforce will have a vastly superior knowledge of, and ability with, AI than the workforce that went to school before the LLM-based genAI revolution of 2022 and 2023. That’s why one of the smartest things you can do now, regardless of your specific occupation, is to get very serious about learning a lot more about genAI. “Prompt engineering” — the ability to use words to get output from genAI tools — is the skill of the year. But it’s only a matter of time before basic proficiency in prompt engineering becomes commonplace and banal. It’s important to set yourself apart from the crowd by going further and really studying how generative AI works, its limitations and potentialities, and the ethical and legal issues around its output.


Why digital banking is a crucial financial literacy skill for kids

By starting early and providing guidance, parents and educators play a crucial role in helping children develop strong financial literacy skills. Digital banking not only enables children to understand the mechanics of money but also fosters a healthy relationship with finances. For instance, some innovative neobanks in India are currently providing prepaid cards that are exceptionally user-friendly and intuitive. These cards offer a unique opportunity for children to develop crucial financial literacy skills, such as prudent money management, efficient budgeting, and smart savings habits. ... The positive impact of early financial education, including digital banking literacy, on long-term financial well-being cannot be overstated. Introducing children to digital banking at a young age provides them with the knowledge and skills needed to make informed financial decisions throughout their lives. It not only equips them with the tools to navigate the cashless economy effectively but also fosters financial independence, responsibility, and resilience in the face of evolving financial challenges.


Mastering a multi-cloud environment

It is essential to understand the challenges that exist while creating a robust multi-cloud architecture. You need to incorporate the right set of tools and technologies to support workload placement across diverse platforms and services. A solid operating model to effectively manage multi-cloud use is imperative – breaking it down into process, security, technology, financial operations, and people and skills. One of the keys is aligning IT service management with your multi-cloud operating model: implementing the right technology to effectively operate, manage, monitor and secure resources and services among providers, from data management, governance and security to vendor licenses, contracts and more. ... In today’s fast-changing and threat-laden environment, a new approach to resilience is indispensable – one that helps ensure your ability to ‘bounce back’ quickly from disruptions and maintain application availability. New functional capabilities and skills to embed resilience through design are the way forward, and this will likely require businesses to give resilience greater priority as they invest in innovation.


Do we have enough GPUs to manifest AI’s potential?

The current production and availability of GPUs is insufficient to manifest AI’s ever-evolving potential. Many businesses face challenges in obtaining the necessary hardware for their operations, dampening their capacity for innovation. As manufacturers continue ramping up GPU unit production, many companies are already being hobbled by GPU accessibility. According to Fortune, OpenAI CEO Sam Altman privately acknowledged that GPU supply constraints were impacting the company’s business. ... Exploring alternative hardware to power AI applications presents a viable route for organizations striving for efficient processing. Depending on the specific AI workload requirements, CPUs, field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) may be excellent alternatives. FPGAs, which are known for their customizable nature, and ASICs, specifically designed for a particular use case, both have the potential to effectively handle AI tasks. However, it’s crucial to note that these alternatives might exhibit different performance characteristics and trade-offs.


The state of API security in 2023

Only 38% of organizations have solutions that enable them to understand the context between API activities, user behaviors, data streams, and code execution. In hyper-connected digital ecosystems, understanding this data is crucial. An anomaly in user behavior or a suspicious data flow might be early indicators of a breach attempt or a vulnerability exploitation. Moreover, the capability to tailor security responses based on dynamic threat parameters is indispensable. While generalized security protocols can thwart common threats, customized defenses based on threat actors, compromised tokens, IP abuse velocity, geolocations, IP ASNs, and specific attack patterns can be the difference between a repelled threat and a security breach. Yet most organizations do not have this capability. Lastly, companies continue to overlook the need to monitor and understand the communication patterns between API endpoints and application services. An API might be functioning as intended, but if its communication pattern is anomalous or its interactions with other services are unexpected, it could be an indicator of underlying vulnerabilities or misconfigurations.
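To make the "IP abuse velocity" signal mentioned above concrete, here is a minimal, illustrative Python sketch of a sliding-window velocity check. The class name and thresholds are my own invention; a production system would weigh many more signals (geolocation, ASN reputation, token state) and keep counters in shared storage rather than in-process.

```python
import time
from collections import defaultdict, deque

class VelocityMonitor:
    """Flag IPs whose request velocity exceeds a threshold (toy sketch)."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        """Return True if this request is within the allowed velocity."""
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop hits that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # velocity exceeded: throttle or challenge this IP
        q.append(now)
        return True
```

A caller would consult `allow()` per request and escalate (CAPTCHA, block, token revocation) when it returns False.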


AI Safety? Rishi Sunak is all in for Elon Musk's work-free fantasy

“There will come a point where no job is needed,” Musk said. “You can have a job if you want to have a job, for personal satisfaction, but the AI will be able to do everything.” In this world, everyone would have what they wanted: “Not a universal basic income, we'll have universal high income.” Musk didn’t give any ideas on how that world will appear, either because he didn’t think he had to, or he didn’t want to face up to the idea that billionaires might have to learn to share. Instead, he cited the Culture science fiction novels of Iain M. Banks, which really just do the same thing better, placing people in a future quasi-Utopia without giving any suggestion of how society transferred from capitalism to a world of freely available high-tech. The real frustration of the AI Safety Summit, on display in the Musk-Sunak show, is that knowledge is power. Musk and the tech billionaires have both, while our elected representatives have neither [even if Sunak and his wife are personally close to billionaire status themselves].


Digital risk: Time to merge cyber security and data privacy

Taking an integrated business approach to managing digital risk delivers a number of key benefits to organisations. Firstly, it can help to bring forward digital transformation initiatives because the data classification and compliance that companies are undertaking across the business for various purposes is aligned and coordinated. Secondly, a digital risk function that conducts comprehensive assessments of third-party and supply chain digital risk is better positioned to ensure that risk is considered across the organisation. One way to do this is by pre-approving vendors from a risk perspective. “Businesses can digitally transform quicker if they do the supplier approval process up front,” says James Arthur, Partner, Head of Cyber Consulting, Grant Thornton UK. “It’s a lot easier to do this if you have a single digital risk function that proactively assesses cyber security and privacy risk together.” Thirdly, businesses continue to use new technologies to seek out commercial advantage, meaning their approach to data privacy and cyber security also needs to continually evolve, to address new threats and vulnerabilities. 


Where Does Cybersecurity Fit Into the Acquisition Process?

Acquiring and integrating an outside company also means inheriting a brand-new set of cybersecurity risks -- both direct and third-party. “If we make an acquisition, a lot of our customers will request to gain some understanding of the security of the company [we] acquired,” Huber explains. How will a company manage those newly acquired risks? Answering that question takes time and comes with a learning curve. Due diligence plays a big role in uncovering those risks, but the possibility that an unknown risk will emerge following the closing of a deal is almost certain. “I think that is always going to happen,” says Huber. “It’s not [a challenge] you can really plan for other than knowing that something’s going to happen.” Acquisitions can take months or quarters from deal consideration to closing. The first part of that process involves vetting the potential fit from business and technical perspectives. Once an acquisition appears to be a promising fit, the acquiring organization must go through its entire due diligence playbook to understand the opportunities and risks associated with its target.



Quote for the day:

"If it wasn't hard, everyone would do it. The hard is what makes it great." -- Tom Hanks

Daily Tech Digest - November 04, 2023

AI’s Role In Payments 3.0: Balancing Innovation With Responsibility

At the core of the responsible use of AI is protecting individuals and their data. After all, one of the most personal things to people is their financial information. It’s critical for businesses to work with data experts who know how to keep customers’ private information safe while appropriately using payment data to enable personalized service. ... One of AI’s greatest advantages is its ability to scan and analyze large amounts of data and suggest or implement improved experiences related to payments. One could argue that AI might be able to handle these tasks better than humans, not only because it can do so quickly, but because it eliminates the biases humans can impose. However, is that true in practice? Unfortunately, not always. Responsible AI depends upon first choosing the most accurate, audited and unbiased data sources available. Then it must use a system of audits during the development and implementation of the machine learning model—and frequently thereafter—to detect and correct for inappropriate biased decision-making. Another compelling reason to weed out AI bias is compliance.


Decoding Kafka: Why It’s Worth the Complexity

First off, learning Kafka requires time and dedication. Newcomers might take a few days or weeks to grasp the basics and months to master advanced features and concepts. In addition, you need to constantly monitor and learn from the cluster’s performance as well as keep up with Kafka’s evolution and new features being released. Setting up your Kafka deployment can be challenging, expensive and time-consuming. This process can take anywhere between a few days and a few weeks, depending on the scale and the specifics of the setup. You may even decide that a dedicated platform team will need to be created specifically to manage Kafka. ... Kafka is more than a simple message broker. It offers additional capabilities like stream processing, durability, flexible messaging semantics and better scalability and performance than traditional brokers. While its superior characteristics increase complexity, the trade-off seems justified. Otherwise, why would numerous companies worldwide use Kafka? 
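Kafka's distinguishing semantics (an append-only log with per-consumer offsets, rather than destructive queue reads) can be illustrated with a toy in-memory model. This is a teaching sketch under my own naming, not Kafka's API: real Kafka is distributed, durable, and partitioned.

```python
class ToyLog:
    """A minimal in-memory sketch of Kafka's core abstraction.

    Messages are appended to a log and retained; each consumer tracks
    its own offset, so the same data can be read, and re-read, by many
    consumers independently.
    """

    def __init__(self):
        self.records = []   # append-only log
        self.offsets = {}   # consumer name -> next offset to read

    def produce(self, value):
        self.records.append(value)

    def consume(self, consumer, max_records=10):
        start = self.offsets.get(consumer, 0)
        batch = self.records[start:start + max_records]
        self.offsets[consumer] = start + len(batch)
        return batch

    def seek(self, consumer, offset):
        self.offsets[consumer] = offset  # rewind to replay history
```

Two consumers can read the same records at their own pace, and `seek()` shows why replay is cheap: consuming never deletes anything.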


The Tech Gold Rush Is Over. The Search for the Next Gold Rush Is On

The tech industry will probably shrink, too. It will still be an important part of the economy, but employing fewer people and offering more normal returns. Now the question is where young people seeking wealth will turn next. From the Age of Discovery to the actual California Gold Rush to Snapchat, it is human nature to chase fortune where you can. It is a large part of what moves an economy forward. But these modern settings for this quest — the finance industry and the tech industry — are unusual in that they attracted many people who expected to get rich even if they lacked two things normally associated with extreme wealth creation: creating lots of value for the economy, and taking smart risks. The fact that anyone thinks things should be different is simply the result of the historically low interest rates of the last few decades, which helped make capital incredibly cheap. The price of capital is the price of risk, and if it is effectively zero, then it stands to reason that easy fortunes can be made risk-free.


To Improve Cyber Defenses, Practice for Disaster

The primary challenge organizations face when executing crisis simulations is determining the right level of difficulty, says Tanner Howell, director of solutions engineering at RangeForce. "With threat actors ranging from script kiddies to nation-states, it's vital to strike a balance of difficulty and relevance," he says. "If the simulation is too simple, it won't effectively test the playbooks. Too difficult, and team engagement may decrease." Walters says organizations should expand simulations beyond technical aspects to include regulatory compliance, public relations strategies, customer communications, and other critical areas. "These measures will help ensure that crisis simulations are comprehensive and better prepare the organization for a wide range of cybersecurity scenarios," he notes. Taavi Must, CEO of RangeForce, says organizations can implement some key best practices to improve team collaboration, readiness, and defensive posture. "Managers can perform business analysis to identify the most applicable threats to the organization," he says. "This allows teams to focus their already precious time around what matters most to them."


Going All-in With Evergreen Cloud Adoption Brings Its Own Challenges

An evergreen IT strategy, on the other hand, keeps a cloud-based system online throughout to allow businesses to conduct smaller, more regular updates, either weekly, biweekly, or monthly. These can be planned at optimum times to avoid downtime, support business continuity, and mitigate against lost profits. Disruption during transformation is always a risk, so incremental changes also mean pockets of disruption can be easier to isolate and resolve. An MSP expert has the time and knowledge necessary to craft a bespoke strategy for transformation, which builds in operational resilience and offsets potential downtime. For instance, round-the-clock help desks mean businesses can access support and advice whenever they need it, to allow the fastest resolution of problems during the onboarding process. ... MSPs don’t just offer support during implementation; they have a wealth of systems knowledge that businesses can exploit to assist with automation updates, faults, and internal process reviews. For industries such as the manufacturing sector, downtime can account for 5% – 20% of working time, with lost productivity costing up to £180 billion a year.


Ongoing supply chain disruption continues to take its toll

Firstly, there appears to have been little sign of easing on some supply chains, with 86 percent of respondents stating that they had experienced supply chain volatility over the past year – slightly down from the level reported in winter 2022 and similar to the level reported one year ago. Once again, the professionals responsible for delivering new data center facilities to the market have been significantly impacted by the volatility in the supply chain. Our survey of developer/investor respondents revealed that some 91 percent of them confirmed being significantly affected. This figure represents an increase from the 82 percent reported six months ago and 83 percent recorded a year ago. Notably, among our DEC stakeholders, the impact was more pronounced, with 93 percent expressing their strong agreement, compared to 70 percent in Q4 2022. Amongst our service providers, there is still a high level of agreement regarding this disruption, albeit with some marginal easing; 92 percent stated that they had experienced such supply chain problems, down from 97 percent reported six months ago.


What Does a Data Scientist Do?

As the field of Data Science continues to evolve, so too does its potential to transform industries and revolutionize decision-making processes. While descriptive and predictive analytics have long been utilized to gain insights from historical data and make informed predictions about future outcomes, the current emphasis is on prescriptive analytics. Prescriptive analytics takes data analysis to a whole new level by not only providing insights into what might happen but also offering recommendations on how to make it happen. By leveraging advanced algorithms, machine learning techniques, and artificial intelligence, data scientists can now go beyond simply understanding patterns and trends. They can provide actionable suggestions that optimize decision-making processes. The impact of prescriptive analytics is far-reaching across numerous sectors. For example, in healthcare, it can help physicians determine personalized treatment plans based on patient data. In supply chain management, it can optimize inventory levels and streamline logistics operations. In finance, it can assist with risk management strategies.
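The difference between predicting and prescribing can be shown with a toy inventory example (all numbers and function names here are illustrative). Rather than just forecasting demand, the prescriptive step recommends the action, in this case an order quantity, that minimizes expected cost across demand scenarios.

```python
def expected_cost(order_qty, demand_scenarios, holding_cost=1.0, shortage_cost=4.0):
    """Average cost of ordering `order_qty` across demand scenarios."""
    total = 0.0
    for demand in demand_scenarios:
        leftover = max(order_qty - demand, 0)   # units we overstocked
        shortfall = max(demand - order_qty, 0)  # sales we missed
        total += holding_cost * leftover + shortage_cost * shortfall
    return total / len(demand_scenarios)

def recommend_order_qty(demand_scenarios, candidates):
    """Prescriptive step: recommend the order quantity with lowest expected cost."""
    return min(candidates, key=lambda q: expected_cost(q, demand_scenarios))
```

Because a missed sale here costs four times as much as an overstocked unit, the recommendation skews toward ordering at the high end of observed demand, a decision rule, not just a forecast.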


From automated to autonomous, will the real robots please stand up?

Today's robots, despite not being as versatile as Mr. Data, are generally quite useful and functional. These include industrial robots, medical robots, military and defense robots, domestic robots, entertainment robots, space exploration and maintenance robots, agricultural robots, retail robots, underwater robots, and telepresence robots that help people participate in an activity from a distance. My personal interest has been focused on robots available and accessible to makers and hobbyists, robots that can empower individuals to build, design, and prototype projects previously only feasible by those with a shop full of fabrication machinery. I'm talking about 3D printers, which build up objects from layers of molten plastic; CNC devices, which often cut, carve, and remove wood or metal to create objects; laser cutters, which are ideal for sign cutting, engraving, and fabricating very detailed parts and circuit boards; and even vinyl cutters, for carefully cutting light, flexible material in intricate patterns. These machines are programmed using CAD software to define -- aka, design -- the object being built.


The Misleading Use of the ‘Technical Debt’ expression

Good software is software that makes users happy and has the capacity to be improved when necessary. However, some tech debt emerges from this very need to evolve the product. When software does a good job of solving a problem, it probably needs to scale: as more people use it, more features become necessary, and so on. When that happens, some characteristics of your software will need to change, and something that used to work just fine becomes a debt. If you don’t address it soon enough, your great software will be called legacy software. Software and its architecture should be able to evolve. It’s important to map technical opportunities, because they can become technical debt in the future, but we don’t need to act on all of them immediately if they don’t fit the current scenario. I’ve seen many managers say, ‘From now on, let’s never ship a feature with debts’; to me, that shows a deep misunderstanding of the reality of software development. You will need to take on debt to meet delivery deadlines and succeed with your software and your business, and debt will also always emerge as your product’s scenario changes.


What is a digital twin? And why is everyone suddenly building one?

Digital twins of vehicles, factories, and cities already exist, but could there ever be a digital twin of you? It’s very possible, especially as health technology, biosensors, and AI’s predictive ability improves. It’s easy to envision a day when every person has a digital health twin that mirrors our physical and genetic makeup and lifestyle, a doppelgänger we can feed hypothetical inputs to and see what effects they may have on our bodies. For example, a digital health twin could show us how adhering to a vegetarian diet would impact our specific body over the next 20 years. Or our digital health twin might be able to reveal the effect that physical stresses on our body will have as we continue to age—what another 10 years of sitting at our desk at work will do to our spine, for example. A digital twin may even be able to show us which treatment option for a disease is better for us by revealing how our unique body would react if we chose to undertake one treatment plan instead of another, say medication versus surgery. All this sounds far-fetched, but the digital twinning of organs and entire bodies is already well underway in the healthcare industry.



Quote for the day:

“Failure defeats losers, failure inspires winners.” -- Robert T. Kiyosaki