Daily Tech Digest - January 16, 2024

Why Pre-Skilling, Not Reskilling, Is The Secret To Better Employment Pipelines

In a landscape where the relevance of skills is constantly shifting, Zaslavski says that organizations should focus on selecting and advancing individuals based on their potential for learning skills like critical thinking and resilience, rather than on hard skills like coding. ... “By concentrating on these fundamental elements, as opposed to current technical proficiency or past work history, organizations position themselves with an agile and future-ready workforce. In this light, pre-skilling should be an integral part of employers’ talent strategy pre- and post-hiring, from sourcing and recruiting to career pathing and employee engagement.” ... She points to areas like understanding whether a potential or existing employee has the EQ and social skills needed to perform as part of a group, or whether they have the curiosity and analytical intelligence needed to learn new hard skills, as well as the ambition and work ethic to achieve results. “When people have learning ability, drive, and people skills, they will probably develop new skills faster than others,” she says.


Agile is a concept we all continuously talk about, but what is it really?

Empiricism, teams, user stories, iterations: they are all examples of tools that we use in Agile, but they are not its purpose. Agile is about empowering people to take control of their environment and giving them complete freedom to discover how to use the available tools in the most effective way. And this applies to the why, too. People adopt Agile to increase efficiency, transparency, velocity, predictability, and quality. But again, all of these are results of Agile, not its goal. It is the mindset that makes it all possible. That is why it is “Individuals and interactions over processes and tools.” To illustrate this, think about empiricism itself. Try introducing empiricism into an organisation mired in a culture of fear and control, and it doesn’t work, no matter what you do. You can’t force empiricism. People are too busy evading blame and manipulating information. Think about it: how often do people complain that the retrospective doesn’t deliver anything? Retrospectives where people just complain and nothing changes?


What Will It Take to Adopt Secure by Design Principles?

What does the future of secure by design adoption look like? CISA is continuing its work alongside industry partners. “Part of our strategy is to collect data on attacks and understand what that data is telling us about risk and impact and derive further best practices and work with companies, and really other nations, to adopt these principles,” Zabierek shares. International collaboration on secure by design is reflected not only in this CISA initiative but also in the Guidelines for Secure AI System Development. CISA and the UK’s National Cyber Security Centre (NCSC) led the development of those guidelines, and 16 other countries have agreed to them. But like the Secure by Design initiative, this framework is also non-binding. A software manufacturer’s timeline for adopting secure by design principles will depend on its appetite, resources, and the complexity of its products. But the more demand from government and consumers, the more likely adoption will happen. Right now, CISA has no plans to track adoption. “We're more focused on collaborating with industry so that we can understand best practices and recommend further better guidelines,” says Zabierek.


Mastering the art of motivation

Once you’ve helped employees connect their dots, the best way to further motivate them is also the cheapest and easiest, and it has the fewest unintended consequences: compliment them on a job well done, whenever they’ve done a job well enough to be worth noting. Sure, there are wrong ways to use compliments as motivators. First and foremost, the employee you’re complimenting must value your opinion. If they don’t, they’ll write off your compliment as just so much noise. Second, a compliment from you should not be an easy compliment to earn. “I really like your belt,” isn’t going to inspire someone to work inventively and late. Third, with few exceptions, compliments should be public. There’s little reason for you to be embarrassed about being pleased with someone’s efforts. With one caveat: usually you’ll have one or two in your organization who routinely perform exceptionally well, but also one or two who are plodders — good enough and steady enough to keep around; not good enough or steady enough to earn your praise. Find a way to compliment them in public anyway — perhaps because you prize their reliability and even temperament.


Do you need GPUs for generative AI systems?

GPUs greatly enhance performance, but they do so at a significant cost. Also, for those of you tracking carbon points, GPUs consume notable amounts of electricity and generate considerable heat. Do the performance gains justify the cost? CPUs are the most common type of processor in computers. They are everywhere, including in whatever you’re using to read this article. CPUs can perform a wide variety of tasks, and while they have far fewer cores than GPUs, they have sophisticated control units and can execute a wide range of instructions. This versatility means they can handle a broad range of AI workloads, including generative AI. CPUs can prototype new neural network architectures or test algorithms, and they can be adequate for running smaller or less complex models. This is what many businesses are building right now (and will be for some time), and CPUs are sufficient for the use cases I’m currently hearing about. CPUs are also more cost-effective in terms of initial investment and power consumption for smaller organizations or individuals with limited resources.
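The trade-off above can be sketched as a rough sizing heuristic. This is a minimal illustration under assumed thresholds (the parameter cutoff, traffic, and latency numbers are invented for the example, not benchmarks):

```python
# Hedged sketch: a rough heuristic for deciding whether a CPU deployment
# is likely adequate for an AI workload, per the trade-offs described
# above. All thresholds are illustrative assumptions.

def cpu_is_adequate(model_params: int, requests_per_second: float,
                    latency_budget_ms: float) -> bool:
    """Return True when a CPU-only deployment is probably sufficient."""
    SMALL_MODEL_PARAMS = 1_000_000_000   # ~1B parameters (assumed cutoff)
    LOW_TRAFFIC_RPS = 10.0               # prototyping / internal tools
    RELAXED_LATENCY_MS = 500.0           # no hard real-time requirement
    return (model_params <= SMALL_MODEL_PARAMS
            and requests_per_second <= LOW_TRAFFIC_RPS
            and latency_budget_ms >= RELAXED_LATENCY_MS)

# Prototyping a 120M-parameter model at low traffic: CPU is fine.
print(cpu_is_adequate(120_000_000, 2.0, 1000.0))      # True
# Serving a 70B-parameter model at scale: reach for GPUs.
print(cpu_is_adequate(70_000_000_000, 200.0, 100.0))  # False
```

In practice the decision also hinges on batch size, quantization, and memory bandwidth, but the shape of the check is the same: small model plus modest traffic usually means the CPU is enough.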


How to create an AI team and train your other workers

Building a genAI team requires a holistic approach, according to Jayaprakash Nair, head of Machine Learning, AI and Visualization at Altimetrik, a digital engineering services provider. To reduce the risk of failure, organizations should begin by setting the foundation for quality data, establishing “a single source of truth” strategy, and defining business objectives. Building a team that includes diverse roles such as data scientists, machine learning engineers, data engineers, domain experts, project managers, and ethicists/legal advisors is also critical, he said. “Each role will contribute unique expertise and perspectives, which is essential for effective and responsible implementation,” Nair said. “Management must work to foster collaboration among these roles, help align each function with business goals, and also incorporate ethical and legal guidance to ensure that projects adhere to industry guidelines and regulations.” ... It’s also important to look for people who like learning new technology, have a good business sense, and understand how the technology can benefit the company.


Data is the missing piece of the AI puzzle. Here's how to fill the gap

Companies looking to make progress in AI, says Labovich, must "strike a balance and acknowledge the significant role of unstructured data in the advancement of gen AI." Sharma agrees with these sentiments: "It is not necessarily true that organizations must use gen AI on top of structured data to solve highly complex problems. Oftentimes the simplest applications can lead to the greatest savings in terms of efficiency." The wide variety of data that AI requires can be a vexing piece of the puzzle. For example, data at the edge is becoming a major source for large language models and repositories. "There will be significant growth of data at the edge as AI continues to evolve and organizations continue to innovate around their digital transformation to grow revenue and profits," says Bruce Kornfeld, chief marketing and product officer at StorMagic. Currently, he continues, "there is too much data in too many different formats, which is causing an influx of internal strife as companies struggle to determine what is business-critical versus what can be archived or removed from their data sets."


3 ways to combat rising OAuth SaaS attacks

At their core, OAuth integrations are cloud apps that can access data on behalf of a user, with a defined permission set. When a Microsoft 365 user installs a MailMerge app in Word, for example, they have essentially created a service principal for the app and granted it an extensive permission set: read/write access, the ability to save and delete files, and the ability to access multiple documents to facilitate the mail merge. The organization needs to implement an application control process for OAuth apps and determine whether an application, as in the example above, is approved or not. ... Security teams should view user security through two separate lenses. The first is the way users access the applications. Apps should be configured to require multi-factor authentication (MFA) and single sign-on (SSO). ... Automated tools should scan the logs and report whenever an OAuth-integrated application is acting suspiciously. For example, applications that display unusual access patterns or geographical abnormalities should be regarded as suspicious.
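The log-scanning step above can be sketched in a few lines. This is an illustration only: the log-entry fields (`app_id`, `country`, `action`) and the per-app country baseline are assumptions, not any vendor's actual log schema:

```python
# Hedged sketch: flag OAuth app activity from countries not seen in the
# app's historical baseline (a "geographical abnormality"). Field names
# and baseline structure are illustrative assumptions.

def flag_geo_anomalies(log_entries, baseline):
    """Return log entries whose country is new for that app."""
    suspicious = []
    for entry in log_entries:
        app, country = entry["app_id"], entry["country"]
        if country not in baseline.get(app, set()):
            suspicious.append(entry)
    return suspicious

# Historical baseline: this app has only ever been used from US and GB.
baseline = {"mailmerge-app": {"US", "GB"}}
logs = [
    {"app_id": "mailmerge-app", "country": "US", "action": "file.read"},
    {"app_id": "mailmerge-app", "country": "KP", "action": "file.delete"},
]
print(flag_geo_anomalies(logs, baseline))  # flags the KP file.delete entry
```

A production system would combine this with rate and scope checks, but the core idea is the same: compare current behavior against each app's learned baseline.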


Cloud cost optimisation: Strategies for managing cloud expenses and maximising ROI

Instead of relying on manual effort, streamlining cloud optimisation through automation can bring enhanced resource savings to the table. The auto-scaling service offered by Amazon Web Services (AWS) is a shining example of how firms can effectively streamline their cloud optimisation in a short time. It also enables swift optimisation in response to the changing resource requirements of systems and servers. ... At the planning stage, firms need to justify the cloud budget and ensure that unexpected spending is kept to a minimum. The same approach has to be followed in the building, deployment, and control phases so that any unexpected rise in spending can be adjusted promptly without throwing financial control into a tizzy. All these steps will help organisations develop a culture of cost-conscious cloud adoption and help them perform optimally while keeping costs in check. ... Incorporating cloud cost optimisation tools is a strategic approach for organisations to streamline expenditures and enhance ROI.
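The logic behind auto-scaling can be sketched as a target-tracking calculation: scale the fleet so per-instance utilisation returns to a target. This mirrors the idea behind AWS target-tracking policies but is a simplified illustration, not the provider's actual algorithm; the 60% target and size bounds are assumptions:

```python
# Hedged sketch of target-tracking auto-scaling: compute the capacity
# needed to bring per-instance utilisation back to a target level.
import math

def desired_capacity(current_instances: int, current_util_pct: float,
                     target_util_pct: float = 60.0,
                     min_size: int = 1, max_size: int = 20) -> int:
    """Scale so that per-instance utilisation approaches the target."""
    needed = math.ceil(current_instances * current_util_pct / target_util_pct)
    return max(min_size, min(max_size, needed))   # clamp to fleet bounds

print(desired_capacity(4, 90.0))   # 6: load spike, scale out
print(desired_capacity(4, 15.0))   # 1: idle period, scale in and save cost
```

Running this on a schedule (or on metric alarms) is what turns cost optimisation from a periodic manual exercise into a continuous one.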


Pull Requests and Tech Debt

The biggest disadvantage of pull requests is understanding the context of the change, whether technical or business: you see what has changed without necessarily learning why the change occurred. Almost universally, engineers review pull requests in the browser and do their best to understand what’s happening, relying on their understanding of the tech stack, architecture, business domains, etc. While some have the background necessary to mentally grasp the overall impact of the change, for others it’s guesswork, assumptions, and leaps of faith… which only gets worse as the complexity and size of the pull request increases. [Recently a friend said he reviews all pull requests in his IDE, greatly surprising me: it’s the first I’ve heard of such diligence. While noble, that thoroughness becomes a substantial time commitment unless that’s your primary responsibility. Only when absolutely necessary do I do this. Not sure how he pulls it off!] Other than those good Samaritans, mostly what you’re doing is static code analysis: within the change in front of you, what has changed, and does it make sense? You can look for similar changes, emerging patterns that might drive refactoring, best practices, or others making similar changes.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley

Daily Tech Digest - January 15, 2024

Authentication is more complicated than ever

Even if posture is improved and stronger forms of MFA are invoked at login, attackers will constantly be looking for new holes to exploit. Therefore, it's important to put in place detection logic and checks for compromise. Ideally, detections should target known attack techniques, but also leverage ML/AI algorithms to detect anomalous or novel suspicious behavior. For example, knowing historical access patterns can highlight when credentials suddenly attempt access from a new device or location. Put differently, authentication can no longer be only about authentication. The decision to validate a credential must be more than a question of the right password and MFA. It must include the context and conditions of the request, checked and confirmed by policy each time. When identity-based attacks are detected, automated responses should be invoked. This can mean stepping up authentication requirements, revoking access, quarantining an identity until the situation is resolved, or executing more complex responses.
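The context-aware decision described above can be sketched as a small policy function. The history structure, field names, and the three-tier response (allow, step up, quarantine) are illustrative assumptions, not a specific product's logic:

```python
# Hedged sketch: evaluate a login against historical access patterns and
# return a policy action, as described above. A known device from a new
# location triggers step-up MFA; an entirely novel context is quarantined.

KNOWN = {"alice": {("laptop-01", "DE"), ("phone-03", "DE")}}  # (device, country)

def evaluate_login(user: str, device: str, location: str) -> str:
    """Return 'allow', 'step_up' (extra MFA), or 'quarantine'."""
    history = KNOWN.get(user, set())
    seen_device = any(d == device for d, _ in history)
    seen_location = any(loc == location for _, loc in history)
    if seen_device and seen_location:
        return "allow"
    if seen_device or seen_location:
        return "step_up"        # one novel factor: step up authentication
    return "quarantine"         # fully novel context: hold for review

print(evaluate_login("alice", "laptop-01", "DE"))   # allow
print(evaluate_login("alice", "laptop-01", "BR"))   # step_up
print(evaluate_login("alice", "tablet-99", "BR"))   # quarantine
```

Real systems would feed many more signals (time of day, impossible travel, token age) into a risk score, but the shape is the same: the credential check is only one input to the decision.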


The Importance of Human-centered AI

Creating a functional and reliable AI requires a combination of domain and data science expertise with design acumen. Domain experts are particularly important when developing AI for the legal sector, as legal operations professionals, attorneys, and others bring highly valuable knowledge when training AI to deliver results for corporate legal departments (CLDs). Data scientists cleanse, analyze, and glean insights from large amounts of data. AI design strategists create systems, design prototypes, and assist in model building, all while focusing on delivering intelligence in a user-centric way. It’s impossible for an AI model to work optimally without all these individuals working together. For instance, a model built just by data scientists might technically work, but it probably won’t be focused on the user or their business needs. Meanwhile, a model created by an AI designer may not have the breadth of insights it could have if a data scientist and domain expert were also involved. It’s this diversity of human talent and perspectives that lays the initial groundwork for everything that organizations want in AI.


Green data centers: efforts to push sustainable IT developments

Modular designs reduce the need for significant infrastructure modifications by enabling the gradual development of data centre capacity. In addition to saving energy, using more energy-efficient servers, storage units, and networking hardware can provide greater scalability by lowering the requirement for extra power and cooling infrastructure. A data centre’s demand for cooling increases with its size, and new technologies are contributing to better efficiency and energy savings. Along with this, scaling up without consuming more energy is possible with the use of effective cooling techniques like liquid cooling. Effective data centre management techniques like load balancing and resource sharing help optimise resource utilisation and maximise scalability. Server virtualisation maximises efficiency internally, lowering the requirement for physical equipment and energy usage. Artificial intelligence and machine learning enable real-time monitoring and adjustment of energy use, which makes infrastructure more adaptable and efficient.


Unravelling the Persistence of Legacy Malware: By Shailendra Shyam Sahasrabudhe

While the term “legacy” may evoke images of outdated systems and forgotten technologies, in the realm of cyber threats, it takes on a more sinister connotation. Legacy malware, often several years old, continues to haunt organizations, primarily due to the shrewd tactics employed by threat actors. Global organizations face a substantial threat due to the lax enforcement of security standards for IoT device manufacturers, exacerbated by the widespread presence of shadow IoT devices within enterprise networks. This significant risk is posed by the targeting of “unmanaged and unpatched” devices by threat actors, who often leverage these vulnerabilities to establish an initial foothold in the targeted environment. These threat actors, operating as de facto businesses, harbour a vested financial interest in extending the shelf life of their malware. This involves the recycling and repackaging of malicious code, coupled with innovative market strategies. Technical manoeuvres such as code recompilation, binary morphing, and the creation of fresh signatures to sidestep traditional antivirus defences are par for the course.
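One reason binary morphing works so well against traditional antivirus defences is that hash-based signatures are brittle: changing a single byte of a payload produces a completely different hash. A minimal sketch (the "binary" here is a toy byte string, not real malware):

```python
# Hedged sketch: why binary morphing defeats signature matching. A
# one-byte change to a payload yields an entirely different SHA-256
# digest, so a signature keyed to the old hash no longer matches.
import hashlib

original = b"\x4d\x5a" + b"payload" * 10   # toy stand-in for a binary
morphed = bytearray(original)
morphed[10] ^= 0xFF                        # flip a single byte

sig_original = hashlib.sha256(original).hexdigest()
sig_morphed = hashlib.sha256(bytes(morphed)).hexdigest()

print(sig_original == sig_morphed)         # False: the old signature misses
```

This is why defenders increasingly rely on behavioral detection and fuzzy or structural hashing rather than exact-match signatures alone.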


The 3 Paradoxes of Cloud Native Platform Engineering

Given the plethora of DevOps tools on the market, assembling the optimal toolchain can slow everyone down and lead to inconsistent results. The solution: ensure platform engineering teams build an IDP that includes the best set of tools for the tasks at hand. The goal of such a platform is to provide a “golden path” for developers to follow, essentially a recommended set of tools and processes for getting their work done. However, this golden path can become a straitjacket. When this golden path is overly normative, developers will move away from it to get their jobs done, defeating its purpose. As with measuring their productivity, developers want to be able to make their own choices regarding how they go about crafting software. As a result, platform engineers must be especially careful when building IDPs for cloud native development. Jumping to the conclusion that tools and practices that were suitable for other architectural approaches are also appropriate for cloud native can be a big mistake. 


Cloud Computing's Role in Transforming AML and KYC Operations

The biggest advantage is data centralization. Data is not scattered across different systems, which allows compliance investigators to get a holistic view of information about a customer in one place, speeding up the investigation process and decision-making. Cloud platforms allow for seamless storage at very low cost and also equip organizations with far richer querying and analytical toolsets. This further aids the compliance investigation process, as the AML investigator gets a view of all the transactions and the trend analysis much faster. AML platform providers were also coaxed to shift from typical on-premise solutions to creating cloud-based platforms, which could then be mere plug-and-play SaaS solutions for the FIs. These enable real-time monitoring of transactions, alerting on any suspicious activity almost immediately. Unified AML platforms on the cloud also allow collaboration across the AML process chain and the overall FI ecosystem.
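The real-time monitoring described above boils down to running rules over a centralized transaction stream. A minimal sketch, with invented thresholds and rule names (real AML rules are far richer and jurisdiction-specific):

```python
# Hedged sketch of rule-based transaction monitoring: flag large
# transfers and possible structuring (repeated just-under-threshold
# transfers from one account). Thresholds are illustrative assumptions,
# not regulatory guidance.

def flag_suspicious(transactions, large_amount=10_000, burst_count=3):
    alerts = []
    per_account = {}                      # running count per account
    for tx in transactions:
        acct, amount = tx["account"], tx["amount"]
        per_account[acct] = per_account.get(acct, 0) + 1
        if amount >= large_amount:
            alerts.append((acct, "large_transfer"))
        elif (amount >= large_amount * 0.9
                and per_account[acct] >= burst_count):
            alerts.append((acct, "possible_structuring"))
    return alerts

txs = [
    {"account": "A1", "amount": 9500},
    {"account": "A1", "amount": 9700},
    {"account": "A1", "amount": 9800},   # third just-under-threshold transfer
    {"account": "B2", "amount": 12000},  # outright large transfer
]
print(flag_suspicious(txs))
```

With data centralized on one platform, rules like these can run as transactions arrive rather than in overnight batches, which is the "almost immediately" the text refers to.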


15 ways to grow as an IT leader in 2024

Di Maria says having a group of trusted advisors can help CIOs — or any professional — identify and correct deficits as well as hone and build up strengths. She advises CIOs to tap several executives from outside their current organization, including those from other functional areas and industries, so that CIOs can gain from their diverse experiences and perspectives. ... Di Maria also recommends CIOs create an executive brand this year, if they haven’t done so already. “This helps you be a better leader and help you advance, because it has you focus on what you stand for,” she explains. “It helps you focus on how you show up and what you do so you’re more effective in your job. It helps you figure out what you should be doing, what your priorities are, and how what you’re doing provides value in your workplace.” ... As tech leaders, CIOs are instrumental in leading people through that change — and they must be better at it than they’ve been in the past, says Jason Pyle, president and managing director of Harvey Nash US and Canada, an IT recruitment and consultancy firm. “It will come down to navigating all the human elements,” he says.


Flipping the BEC funnel: Phishing in the age of GenAI

Unfortunately, a significant majority of organizations appear ill-prepared to counter these emerging phishing threats. Chief among the concerns facing most organizations today is the record-high cybersecurity workforce gap, with an estimated need for an additional 4 million professionals worldwide to protect digital assets, as reported by ISC2. The same report reveals that nearly half (48%) of organizations today lack the tools and talent to respond to cyber incidents effectively. Furthermore, the ISC2 study shows that today’s cybersecurity professionals are feeling less than confident about the current threat landscape. A staggering 75% of them assert that the present threat landscape is the most formidable they’ve encountered in the past five years, and 45% anticipate that artificial intelligence (AI) will pose their greatest challenge in the next two years. This outlook underscores the urgency for organizations to fortify their cybersecurity defenses and adapt to the rapidly evolving nature of cyber threats. Our analysis found over 8 million phishing attempts successfully evaded native defenses in 2022 alone.


Eye on the Event Horizon

While multifactor authentication is crucial for securing online accounts, SMS OTP is not the most secure form of MFA. Other, more secure methods are more difficult to hack or replicate, making them a safer option for high-risk transactions. Using WhatsApp OTP to address SMS OTP security issues could be a simple but effective solution, as it offers end-to-end encryption and is cheaper than SMS. Single Sign-On via social login is a good option for nonfinancial applications. ... It is important to choose the most secure and reliable authentication method to protect against fraud and financial losses. While hardware-based tokens are the most secure option, they can be inconvenient to carry. There are good alternatives available, such as biometric authentication, mobile authentication apps, and FIDO standards. An authenticator app, a mobile application, provides an extra layer of security for your online accounts by generating time-based one-time passwords (TOTPs). These passwords are used for two-factor authentication and help protect your accounts from unauthorized access.
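Under the hood, an authenticator app derives each code from a shared secret and the current 30-second time step, per RFC 6238. A compact sketch (shown with the RFC's published test secret; real apps usually display 6 digits rather than the 8 used in the spec's test vectors):

```python
# Hedged sketch of TOTP generation (RFC 6238): HMAC-SHA1 the current
# time step with a shared secret, then dynamically truncate the result
# to a short numeric code.
import hmac, hashlib, struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 8) -> str:
    counter = struct.pack(">Q", unix_time // step)        # 8-byte time step
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at T=59 yields 94287082.
print(totp(b"12345678901234567890", 59))   # 94287082
```

Because the code depends only on the secret and the clock, the server can compute the same value independently; nothing secret travels over SMS or any other channel at login time, which is exactly what makes TOTP stronger than SMS OTP.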


5 ways QA will evaluate the impact of new generative AI testing tools

Several experts weighed in, and the consensus is that generative AI can augment QA best practices, but not replace them. “When it comes to QA, the art is in the precision and predictability of tests, which AI, with its varying responses to identical prompts, has yet to master,” says Alex Martins, VP of strategy at Katalon. “AI offers an alluring promise of increased testing productivity, but the reality is that testers face a trade-off between spending valuable time refining LLM outputs rather than executing tests. This dichotomy between the potential and practical use of AI tools underscores the need for a balanced approach that harnesses AI assistance without forgoing human expertise.” Copado’s Hannula adds, “Human creativity may still be better than AI figuring out what might break the system. Therefore, fully autonomous testing—although possible—may not yet be the most desired way.” Marko Anastasov, co-founder of Semaphore CI/CD, says, “While AI can boost developer productivity, it’s not a substitute for evaluating quality. Combining automation with strong testing practices gives us confidence that AI outputs high-quality, production-ready code.”



Quote for the day:

"Success does not consist in never making mistakes but in never making the same one a second time." --George Bernard Shaw

Daily Tech Digest - January 14, 2024

Quantum mechanics uncovers hidden patterns in the stock market

What does this mean for the stock market? It implies that higher volatility and a slower reversion to equilibrium amplify herding behavior among investors, especially during times of uncertainty and information asymmetry. The study goes further by testing this model with empirical data from the U.S. stock market. Using the growth rate of gross domestic product (GDP) and forecaster uncertainty as indicators for business cycles and economic uncertainty, respectively, they found a positive correlation between the power law exponent and the GDP growth rate, and a negative correlation with forecaster uncertainty. This confirms their theoretical predictions and highlights the role of economic uncertainty in linking business cycles with herding behavior in stock returns. ... “Our study shows that quantum mechanics can be a useful tool to understand the stock market, a complex system with many interacting agents. We hope that our study can inspire more interdisciplinary research that combines physics and finance to explore the hidden patterns and mechanisms of the stock market,” he states.


'We Never Upskill Fast Enough': NTT DATA Services CEO Bob Pryor on mastering change

It's always a challenge, and to be honest, we never upskill fast enough given the myriad of options available. However, we're heavily investing in training, development, and skilling across all levels. Retaining talent involves helping them acquire more advanced technologies and skills in high-demand disciplines. Individuals tend to find greater satisfaction in roles that require complexity over those that are simpler to master. Constantly evolving the mix of skills, technology, and labour is crucial. Take AI, for example—it doesn't eliminate labour; it enhances people's efficacy when working with AI. In healthcare, top oncologists use advanced AI algorithms for diagnosis, medical devices, and treatment. The challenge isn't whether they are displaced by technology but whether we're scaling them fast enough to use the advanced technologies we're investing in and developing. Working effectively with AI involves having people smart enough to ask the right questions—what to create, what questions to ask, and how to interpret language models. 


5 Ways To Upskill As A Leader And Gain Respect From Your Team

Leadership is about building relationships, not task lists. This year, upskill yourself by building these skills to develop a leadership style that inspires cooperation and motivation, not fear. ... Being polite shows the people around you that you respect them, and they are more likely to return the favor. It costs you nothing to be kind. A basic greeting can go a long way, as can asking about your employees’ weekends, family, etc. Remember to say please and thank you. Never interrupt when your employees are talking, and show that you respect their time, work, and ideas. ... Bossing people around doesn’t feel great long term. You know when there’s tension in your office and when people aren’t glad to see you. It’s not good for your mental health to spend nine hours a day (or more) with people who resent your presence. When you tap into your humanity to create better relationships with your employees and become a leader people enjoy working with, not only will you feel more respected as a person, but you’ll likely also enjoy the benefits of a happier workforce, such as higher productivity, better work and even higher profits.


Yes, We're Still Messing Up Hybrid Work. Here's Where Exactly We're Going Wrong.

Hybrid work environments are dynamic, and what works one day may not be effective the next. Managers must be trained to be flexible in their leadership approach, adapting to the varying needs of their team members. This adaptability also means being open to feedback and willing to continuously learn and evolve their management style. It involves understanding the unique challenges and opportunities of managing remote and in-office team members and being adept at creating a cohesive team culture that bridges the physical divide. Honing communication skills is another key focus. In a hybrid setup, clear and inclusive communication is paramount. Managers need to be adept at conveying their messages effectively across various digital platforms, ensuring that every team member, whether remote or in-office, feels equally involved and informed. ... Developing strategies for remote team building is equally important. Hybrid work models can lead to a sense of disconnection among team members.


It’s time to fix flaky tests in software development

Not only do flaky tests threaten the quality and speed of software delivery, they pose a very real threat to the happiness and satisfaction of software developers. Similar to other bottlenecks in the software development process, flaky tests take developers out of their creative flow and prevent them from doing what they love: creating software. Imagine a test passes on one run and fails on the next, with no relevant changes made to the codebase in the interim. This inconsistent behavior can create a fog of confusion, and lead developers down demoralizing rabbit holes to figure out what’s gone wrong. It’s a huge waste of time and energy. By addressing flaky tests, technology leaders can directly improve the developer experience. Instead of getting tangled up in a web of phantom problems that drain their time and energy, developers are able to spend more time on fulfilling tasks like creating new features or refining existing code. When erratic tests are eliminated, the development process runs much more smoothly, resulting in a more motivated and happier team.
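Detecting flakiness starts with a simple observation: on an unchanged codebase, rerunning the same test should always give the same answer. A minimal sketch, using a deliberately nondeterministic function as a stand-in for a flaky test:

```python
# Hedged sketch: a test is flaky if repeated runs on the same code
# disagree with each other. The "tests" below are toy stand-ins.
import random

def is_flaky(test_fn, runs: int = 50) -> bool:
    """Rerun the test and report whether outcomes were inconsistent."""
    outcomes = {bool(test_fn()) for _ in range(runs)}
    return len(outcomes) > 1        # both pass and fail were observed

def stable_test():
    return 2 + 2 == 4               # deterministic: always passes

def flaky_test():
    return random.random() > 0.3    # passes ~70% of the time

random.seed(42)
print(is_flaky(stable_test))   # False
print(is_flaky(flaky_test))    # True
```

CI systems apply the same idea at scale: quarantine tests whose pass/fail history varies without corresponding code changes, so they stop blocking the pipeline while someone fixes the root cause.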


Building Cybersecurity Resilience With the Power of Habit

Clear's principles and philosophy, advocating for small yet consistent changes, should resonate deeply with cyber professionals. These principles, while not originally intended for the cybersecurity realm, can be creatively applied to construct a robust framework for a resilient cybersecurity culture, adapted to the cultivation of cybersecurity habits. ... The journey can begin with the fundamentals, for example, the management of cloud access rights. This involves regularly reviewing who has access to what information or resources and why, revoking access rights when an employee changes roles or leaves the organization, and implementing the principle of least privilege, wherein users are given the minimum levels of access necessary to perform their jobs. These minor changes, when consistently applied, can become the building blocks of an enterprise’s cybersecurity framework. The cumulative effect of such microchanges can be surprising.
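The access-review habit described above can be made concrete as a small recurring script: compare each user's granted permissions against a least-privilege baseline for their role and report the excess. The roles, permission names, and user records here are illustrative assumptions:

```python
# Hedged sketch of a periodic access review: find permissions that
# exceed each role's least-privilege baseline. Role definitions and
# permission strings are invented for illustration.

ROLE_BASELINE = {
    "analyst": {"storage:read"},
    "engineer": {"storage:read", "storage:write", "compute:deploy"},
}

def review_access(users):
    """Return {user_name: permissions that exceed the role baseline}."""
    findings = {}
    for user in users:
        allowed = ROLE_BASELINE.get(user["role"], set())
        excess = set(user["granted"]) - allowed
        if excess:
            findings[user["name"]] = excess
    return findings

users = [
    {"name": "dana", "role": "analyst",
     "granted": {"storage:read", "storage:delete"}},  # left over from an old role
    {"name": "eli", "role": "engineer",
     "granted": {"storage:read", "compute:deploy"}},  # within baseline
]
print(review_access(users))   # {'dana': {'storage:delete'}}
```

Run weekly, a check like this is exactly the kind of small, consistent habit the text advocates: each review is minor, but the cumulative effect is a steadily shrinking attack surface.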


Customer Experience Is King, but CIOs Could Do More to Help

The very nature of how customer experience projects get defined and shepherded places IT at the back of the room, as an executor of tasks but not as a strategic leader. Is this bad? Not necessarily, considering that the end business units interacting with the customer ostensibly have expertise in dealing with customers and are in the best position to know what customers want. However, as technology becomes a more integral element of selling to, informing, fulfilling, and servicing customers, there is also unique expertise that IT brings to the table. It can be invaluable in improving the customer experience, and it can also avert disaster. Being able to sell non-stop, 24/7 to worldwide customers is a major driver of e-commerce, as is the ability to provide customers with self-service options that can reduce internal operational costs for companies. Analytics, which can assess an individual customer's or a demographic's buying habits and anticipate what customers will want to buy next, is also seen as beneficial.


Leveraging Chaos Engineering To Test The Resilience Of Distributed Computing Systems

It helps build the resilience of distributed computing systems and improves their ability to withstand unexpected disruptions. Read on to learn how. Chaos engineering achieves this by introducing random and unexpected behavior in a controlled manner to identify system weaknesses. How does it benefit organizations? By enabling them to identify system vulnerabilities before failures actually occur. As a result, an organization can proactively adopt measures to plug potential vulnerabilities and improve system stability. However, developers associated with a premier software development company use an innovative approach to chaos engineering. ... The concept might look similar to stress testing, but the two are not the same. There are some key differences. For one, chaos engineering proactively identifies system or network issues and corrects them, and it tests and corrects all components at the same time. Here, developers associated with a software development company in New York tend to look beyond possible causes and obvious issues.
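"Random behavior in a controlled manner" can be as simple as a fault-injecting wrapper around a service call. A minimal sketch, with an invented service function and failure rate; real chaos tooling injects faults at the network or infrastructure layer rather than in-process:

```python
# Hedged sketch of controlled fault injection: a decorator that randomly
# fails a call at a configured rate, so resilience logic (here, a simple
# retry loop) can be exercised before a real outage exercises it for you.
import random
from functools import wraps

def inject_chaos(failure_rate: float, rng: random.Random):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if rng.random() < failure_rate:
                raise ConnectionError("chaos: injected failure")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

rng = random.Random(7)                    # seeded for a controlled experiment

@inject_chaos(failure_rate=0.3, rng=rng)
def fetch_balance():
    return 100                            # stand-in for a remote call

def with_retries(fn, attempts=10):
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError:
            continue                      # resilience under test
    raise RuntimeError("service unavailable")

print(with_retries(fetch_balance))        # retries absorb the injected faults
```

The key word is "controlled": the failure rate, the blast radius, and the seed are all chosen by the experimenter, which is what separates chaos engineering from simply having unreliable systems.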


Neither ‘Agile’ nor Architecture are Going Anywhere

Want to move the enterprise to little-a or big-A agile? Want to modernize the technology stack? Implement flex points in subsystems? Integration effectiveness? Harness information for outcomes? Deliver technology services? Event-Driven Architecture? Customer-Centric Design? Manage cross-system compatibility and quality attributes? Handle mergers and acquisitions well? Project/team thinking does not account for these outcomes. The product owner doesn't understand them, and the development lead is focused on speed, simplicity and delivery; they may not understand them either. Architecture connects big outcomes to little decisions. I have seen huge objectives brought low by simple development decisions. ... From the board room to the basement. From idea to outcome. In between operating responsibilities. In between competing business objectives. With partners. With vendors. With an ever-changing technology adoption cycle. From finance to legal to customer impacts, it takes a LOT of facilitation, discussion, decision making and prioritization to deliver a balanced, advantageous technology strategy.


Demystifying Cloud Trends: Statistics and Strategies for Robust Security

The Shared Responsibility Model is a security and compliance framework that defines the responsibilities of cloud service providers (CSPs) and cloud customers for securing every aspect of the cloud environment, including hardware, infrastructure, endpoints, data, configurations, settings, operating system (OS), network controls and access rights. In basic terms, this model helps clarify who is responsible for securing various aspects of the cloud infrastructure, services, and data. The division of responsibilities varies depending on the cloud deployment model. ... Implementing strong IAM practices, enforcing the principle of least privilege to restrict access rights for users and systems, and regularly reviewing and updating access permissions can have a major positive impact on an organization's cloud security posture. It's as simple as granting users and other cloud resources authorization to access only the required resources, and only to the required extent. Multi-factor authentication (MFA) adds an additional layer of security, ensuring that only authorized users have access to resources and data.
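The least-privilege principle described above can be sketched as a deny-by-default authorization check, with an MFA gate on sensitive actions. The role names, action strings, and the rule that `iam:`-prefixed actions require MFA are invented for illustration; real cloud IAM systems are far richer, but the shape of the logic is the same.

```python
# Role -> minimal set of allowed actions; nothing is granted by default.
PERMISSIONS = {
    "analyst": {"storage:read"},
    "admin": {"storage:read", "storage:write", "iam:review"},
}

def is_allowed(role, action, mfa_verified=False):
    """Deny by default; sensitive IAM actions additionally require MFA."""
    allowed = action in PERMISSIONS.get(role, set())
    if action.startswith("iam:"):
        return allowed and mfa_verified
    return allowed

assert is_allowed("analyst", "storage:read")
assert not is_allowed("analyst", "storage:write")          # not granted
assert not is_allowed("admin", "iam:review")               # MFA missing
assert is_allowed("admin", "iam:review", mfa_verified=True)
```

An unknown role or an unlisted action falls through to a denial, which is exactly the posture the principle of least privilege calls for.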



Quote for the day:

"We become what we think about most of the time, and that's the strangest secret." -- Earl Nightingale

Daily Tech Digest - January 13, 2024

Frenemies to friends: Developers and security tools

Cultural shifts happen when security is built into the developer’s existing flow, as opposed to being injected as its own new stage in the pipeline. Look for points in their process where they are already in “pause” or “edit” mode, like at the Pull Request, where you can surface vulnerabilities and ask for remediation efforts. Doing so can avoid context switching and feelings of being interrupted. Capitalizing on an existing developer pause point can help train your developers to look at security vulnerabilities like functionality bugs, a skill they already have, while also shortening feedback loops. ... Developer-to-developer enablement is key. There is often a feeling of mistrust between engineering and security, but developers share the same interests and have the same priorities. Let individual contributors have an opportunity to educate and enable other individual contributors. If you have had a successful pilot or PoC team, or notice self-motivated folks using the tool proactively, give them space to share their experience with the tool. 


The Joys and Pains of DevOps

DevOps is very much a culture change in the way development, operations and even security work together. Even though DevOps aims to improve this, in many cases, these areas still function in silos. There are times when one area implements something that blocks another; and as a DevOps leader, you’re often in the middle trying to figure out the best path forward while also finding an acceptable middle ground. ... A well-engineered DevOps solution should render the team invisible. That includes both the happy path, when deployments succeed, as well as how well you enable teams to solve their deployment issues. There is also one common element of what makes DevOps rewarding: improving developer experience and business outcomes. Dale Francis, director of product development at Climavision, says the rewards of DevOps come from solving problems, so day-to-day operations become simple and the experience for developers better. In addition, maturing as a DevOps organization also lets everyone focus more on solving business problems, rather than fighting technical issues. 


Why Engineering Is Key To A Flourishing Workplace Culture

If your engineering strategy demands precision but your workplace culture tolerates ambiguity and shortcuts, you won't get anywhere. If your engineering strategy demands accountability but your workplace culture doesn't draw connections between an individual's efforts and the higher goals of the operation, you won't get anywhere. If your engineering strategy demands innovation but your workplace culture rewards risk aversion, you won't get anywhere. ... In an arena as complex and technical as engineering, it's easy to lose sight of the human side. Whether your workplace is in-person, remote or hybrid, it's crucial to create spaces (literal or virtual) where employees feel connected and empowered to ask questions. Trust and creativity flourish in an environment where autonomy and authentic connections coexist. ... Inertia is fatal to engineering. Regularly evaluate and adopt new technologies. Find out what your customers need. Find out what hurdles they're up against. Think three steps ahead so your tech stack supports the evolving needs of your business and the market.


Life's Too Short to Work With Incompatible People

Celebrate failure and learn to give feedback. When you embrace failure, you learn and course-correct more quickly. Failure is a sign you're doing something right. You're testing, learning, flexing your creative muscles and moving on efficiently after hitting a brick wall. You must build a team open to feedback to make the most of your failures for the company's good. Feedback is the mode by which we make positive changes out of failure. The challenge? Feedback makes most people cringe. We associate it with criticism as opposed to growth. ... Clear communication may seem like an obvious necessity on high-performing teams, but it's something that's often taken for granted. Unclear communication can quickly tank a team's efforts. A team that has mastered precise communication, on the other hand, can achieve incredible outcomes quickly. We follow an "open book" mentality at Wistia. On all-hands calls, we share candid information about the state of the company – inclusive of the good and the bad – so everyone has the big picture. 


Researchers demo new CI/CD attack techniques in PyTorch supply-chain

Khan initially found a critical vulnerability that could have led to the poisoning of GitHub Actions’ official runner images. The “runners” are the VMs that execute build actions defined inside GitHub Actions workflows. After reporting the vulnerability to GitHub and receiving a $20,000 bug bounty for it, Khan realized that the core issue he found was systemic and that thousands of other repositories were likely impacted. Since then, Khan and Stawinski found vulnerabilities in the software repositories and development infrastructure of major corporations and software projects and collected hundreds of thousands of dollars in rewards through bug bounty programs. Their “victims” included Microsoft Deepspeed, a Cloudflare application, the TensorFlow machine-learning library, the crypto wallets and nodes of several blockchains, and PyTorch, one of the most widely used open-source machine-learning frameworks. PyTorch was originally developed by Meta AI, a subsidiary of Meta, but its development is now governed by the PyTorch Foundation, an independent organization that operates under the Linux Foundation’s umbrella.


For a Secure Foundation, Health Systems Must Address Technical Debt

We need to update network equipment and workstations. We may still even have Windows 2003 and 2008. And hardware is not as expensive as the applications that are on there. So that level of technical debt, and competing for those dollars, where in healthcare you need to have nice offices and that type of thing. So we're competing with those, with other projects or capital, where other organizations may think of that as just an ongoing IT update expense. ... I might hear this stuff at home occasionally, but it's the same with IT projects. "Hey, we had an acquisition. We got them up and running. We didn't take care of their technical debt so we're assuming that." We're going through some of those servers now, and it's like, can we even find anybody that knows anything about it, or is it just that everyone's afraid to turn it off? What I like to say is, if you didn't sit around the right campfire, you don't know the story. So for me, my job sometimes is just to keep asking those questions: "Who knows something about this server?" Sometimes it comes down to the scream test, but I've developed a quality I call positive persistence. I just keep asking questions politely until we make progress.


The way forward is to make technology 'human-like': Report

As the world undergoes a massive technological transformation, artificial intelligence (AI) and other disruptive technologies will increasingly adopt a more human-like or "Human by Design" approach, according to a new study published on Wednesday. As these technologies become more human-like and intuitive for people to use, they will usher in a new era of unprecedented productivity and creativity, said the report, titled 'Accenture Technology Vision 2024: Human by design, how AI unleashes the next level of human potential,' which also emphasizes that enterprises that prepare for this shift now will be the winners in the future. The research further highlights that as human-centric technologies continue to advance, they are becoming easier to interact with and more seamlessly integrated into every aspect of our lives. ... As AI, spatial computing, and body-sensing technologies evolve to imitate human capabilities and become less noticeable, the true focus will be on the people who are empowered with new capabilities to achieve what was once considered impossible.


Expert Insight: Andrew Snow on a landmark GDPR ruling

For organisations, it makes clear beyond all doubt that ignorance isn’t an excuse. In fact, if organisations – or managers within them – plead ignorance to the infringement now, they may face a higher fine than if they had taken responsibility for their actions. For regulators, an important precedent has been set. This ruling has provided them with clear direction on where the line falls when deciding on issuing administrative penalties, including fines. For instance, the EDPB [European Data Protection Board] recently reported on another case, involving the Slovak and Hungarian authorities, where there was a dispute over the ownership. The Hungarian regulator ultimately determined that both parties jointly determined the purposes of processing, so were joint controllers – and as such, breached the GDPR because their agreement failed to document this and, by extension, their respective responsibilities. Given the timing of this decision, it probably wasn’t influenced by the ECJ ruling, but I expect that future cases like this would use the ruling as a precedent.


What Are Digital Twins and How Can They Be Used in Healthcare?

Trayanova’s research is on applying personalized digital twin approaches to clinical decision-making. She aims to improve predictive diagnostics and to predict optimal treatment plans for patients. This is currently being used to treat patients with heart rhythm disorders. At Johns Hopkins, Trayanova and her team can create a personalized digital twin representing the geometry of a patient’s heart. The digital twin includes the heart’s structure; disease remodeling such as damage, fibrosis and inflammation identified through MRI or PET scans; and its electrical wave propagation. When an electrical wave propagates through the heart, it triggers a contraction. However, if a patient has scarring or other damage, the wave will catch in that area and, rather than propagating through the heart, it will recirculate and cause an arrhythmia. To treat the arrhythmia, the digital twin must accurately represent the damage as well as the electrical activity of each cell in the heart. “Now you have something that dynamically links the heart’s components,” Trayanova says. Using the digital twin, she and her team can send a signal and watch how the electrical wave propagates through the model. 


What will the metaverse mean for business models?

In media and entertainment, the primary model of business has evolved from ownership to subscription. In the past, most people bought CDs and DVDs to build a collection – today, owning vinyl is booming in popularity again. But for the majority of people, the accepted model is accessing songs, films and TV series online and building your own virtual library. The difference is that if you stop paying the subscription, you have nothing. Will it be the same in the metaverse? We’ll have to wait and see. But it’s safe to assume that people will want ownership of their assets without paying a subscription (except for the wallet that protects them). To complicate things, there is the question of what role content from Generative AI will play in metaverse business models. Today, it’s generally accepted that no one owns work created by Generative AI. But won't this change? In fact, this assumption may even be wrong – in the UK for example, the law implies that the creators of the AI platform own anything wholly created by it. 



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - January 12, 2024

Navigating Tomorrow: Becoming an Enterprise of the Future

Preparing for what lies ahead goes far beyond just implementing the right technologies; it is about developing a culture that embraces change with empathy. Cultivating a mindset across the organisation that values innovation, continuous learning, and agility ensures that every employee charges forward with confidence. In times of economic uncertainty and technological advancement, it is crucial that we practice empathy. Naturally, there is some fear that technologies like AI will replace human workers. As such, leaders must help employees understand that technology is here to augment their roles and empower them to spend more time on other valuable tasks. The key to embracing any new technology and providing access at scale is to get everyone in the team on board. Whether greeted with excitement or anxiety, leaders must champion this culture of change by encouraging employees to seek new ways of working while ensuring they remain engaged and valued. Certainly, data-driven decision-making will continue to be the cornerstone of future business endeavours. 


The Importance of Enterprise Architecture in the Modern Business Landscape

The field of Enterprise Architecture is constantly evolving, driven by emerging trends and innovations. One of the significant trends is the adoption of cloud computing and hybrid IT environments. Cloud-based solutions offer scalability, flexibility, and cost-efficiency, making them increasingly popular among businesses. Enterprise Architecture helps organizations leverage these technologies by designing architectures that integrate cloud services and on-premises infrastructure, ensuring seamless operations and efficient resource utilization. Another emerging trend is the incorporation of artificial intelligence (AI) and machine learning (ML) in Enterprise Architecture practices. AI and ML technologies enable businesses to automate processes, analyze vast amounts of data, and gain valuable insights. By integrating AI and ML into their Enterprise Architecture frameworks, organizations can enhance decision-making, optimize business processes, and improve overall efficiency. Furthermore, the rise of digital transformation has had a significant impact on Enterprise Architecture. 


Top 8 challenges IT leaders will face in 2024

To guide an organization through uncertainty, IT leaders must help ensure everyone in the company is on the same page, Srivastava says. Instead of playing catch-up, he suggests a proactive approach with clear communication as a guiding principle. “It starts with establishing a clear set of agreed upon initiatives and outcomes for the organization,” he says. “We have to make sure everyone understands what they are doing, why they are doing it, and — most importantly — how success will be measured.” ... Security is a challenge that makes the list of top CIO worries perennially, but Grant McCormick, CIO of cybersecurity company Exabeam, notes a rising need for increased collaboration between IT and security teams to address the issue. “The role of the CIO has recently seen a massive convergence with cybersecurity,” says McCormick. “Regardless of whether or not security reports into the CIO, or another leader within the company, it is in everyone’s best interest to be conscious of the organization’s security posture and to enable IT and cybersecurity to work in a highly synchronized manner.”


Economic Uncertainty Doesn’t Mean Compromising Cybersecurity

This futuristic technology isn’t just something to tap into to enrich individual experiences; it can also help solve some of society’s most pressing challenges and, most of all, keep people safe. For cryptocurrencies, where there is estimated to be four times more fraud than in regular fiat payments, technology providers are devising new innovations to stay ahead. New solutions can help customers make informed decisions that protect their business, as well as the entire payments ecosystem. A simple dashboard can provide visibility of crypto spend, transaction volumes and exposure to anti-money laundering risk ratings. Through solutions like these, banks and other businesses can earn and, importantly, keep the trust of their customers—on whom their business depends. Trust is fragile. It can be broken in a nanosecond. And as the global financial ecosystem expands, it’s getting harder for organizations to navigate the maze of cyber risks alone. Businesses, merchants, financial institutions and fintechs need trailblazing tools and expert knowledge to understand the risks they’re facing. 


Redefining Data Governance: Bridging The Gap Between Technical And Domain Experts

As the data industry gravitates toward decentralization, specifically federated systems, the absence of a robust framework in data governance, master data and data quality becomes glaringly evident. The prevailing issue in many companies is not the sheer volume of data or a lack of technological options but the erroneous assumption that their data is inherently primed for insights, AI applications and democratization. This misconception overshadows the real challenge: the need for a comprehensive approach to data management that integrates the expertise of domain professionals. The advent of practical AI applications marks a watershed moment in the history of data governance. This technology is not just a tool for automation; it serves as a bridge between the technical and business realms. It provides a platform where business experts can meaningfully contribute to data strategies and decision-making processes. Technical teams initially assumed the mantle of data governance out of necessity due to the requisite skill sets. 


Orchestrating Resilience Building Modern Asynchronous Systems

The first one is state management. Basically, the problem here is that you need to contemplate lots of possible combinations of states and events. For example, the "review received" message could come in while the campaign is in pending state instead of the relevant waiting state, or an out of sequence event could come in from somewhere, and so on. All of those cases need to be handled, even though they are not the most likely sequence of events and states. ... Handling retries becomes a task almost as complex as implementing primary logic, sometimes even more so. You can think of implementing your retry mechanisms in different ways, for example by storing a retry counter in the database and incrementing it on each failed attempt until either you succeed or reach the maximum allowed number of retries. Alternatively, you could embed the retry counter in the queue message itself, so you dequeue a message, process it, and, if it fails, re-enqueue the message and increment the retry count. In both cases this implies a huge overhead for developers.
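The second retry mechanism described above, embedding the retry counter in the queue message itself, can be sketched roughly as follows. An in-memory deque stands in for a real message broker, and the `MAX_RETRIES` limit, message shape, and dead-letter list are assumptions for illustration; the point is how much bookkeeping even this simplest version demands of the developer.

```python
from collections import deque

MAX_RETRIES = 3
queue = deque()

def enqueue(payload, retries=0):
    # The retry count travels inside the message itself.
    queue.append({"payload": payload, "retries": retries})

def process(msg, handler, dead_letter):
    """Run handler; on failure, re-enqueue with an incremented retry count,
    or park the payload in a dead-letter store once the limit is reached."""
    try:
        handler(msg["payload"])
    except Exception:
        if msg["retries"] + 1 < MAX_RETRIES:
            enqueue(msg["payload"], msg["retries"] + 1)
        else:
            dead_letter.append(msg["payload"])

failed = []
attempts = []

def always_fails(payload):
    attempts.append(payload)
    raise RuntimeError("downstream unavailable")

enqueue("review-received")
while queue:
    process(queue.popleft(), always_fails, failed)

assert len(attempts) == MAX_RETRIES      # handler was tried 3 times
assert failed == ["review-received"]     # then the message was dead-lettered
```

The alternative the author mentions, a retry counter persisted in the database, trades this message-mutation logic for an extra read-increment-write round trip per attempt; either way the overhead lands on the developer, which is the article's point.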


Attackers deploy rootkits on misconfigured Apache Hadoop and Flink servers

In the attack chain against Hadoop, the attackers first exploit the misconfiguration to create a new application on the cluster and allocate computing resources to it. In the application container configuration, they put a series of shell commands that use the curl command-line tool to download a binary called “dca” from an attacker-controlled server inside the /tmp directory and then execute it. A subsequent request to Hadoop YARN will execute the newly deployed application and therefore the shell commands. Dca is a Linux-native ELF binary that serves as a malware downloader. Its primary purpose is to download and install two other rootkits and to drop another binary file called tmp on disk. It also sets a crontab job to execute a script called dca.sh to ensure persistence on the system. The tmp binary that’s bundled into dca itself is a Monero cryptocurrency mining program, while the two rootkits, called initrc.so and pthread.so, are used to hide the dca.sh script and tmp file on disk. The IP address that was used to target Aqua’s Hadoop honeypot was also used to target Flink, Redis, and Spring framework honeypots.


Merck's Cyberattack Settlement: What Does it Mean for Cyber Insurance Coverage?

The Merck and Mondelez cases are likely not going to be the last of their kind. More legal disputes between insurers and insureds, whether regarding war exclusions or other issues, could arise in the future. “I think that the cyber litigation is just getting started,” says Stern. More cases could drive change in the way cyber insurance companies approach risk tied to cyberattacks and what is considered cyberwarfare. When new risks challenge the existing approach to coverage, it drives industry change. “Maybe it takes a second or a third dispute to really achieve a definitive conclusion on that particular matter,” says Kannry. “Then, what can often happen is insurance industry says, ‘You know what, that type of loss needs to be understood and defined separately.’” Compared to many other insurance products, cyber insurance is relatively new. That means there remains plenty of room for the development of innovative ways to offer cyber insurance coverage. But the road forward likely won’t be without bumps for insurers and insureds.


Organizations Must Be Prudent To Realize Value In Generative AI

Rather than being swayed by the allure of generative AI capabilities, remain steadfast about the core features that can genuinely transform and enhance your operations. This pragmatic approach should be considered a short- to mid-term strategy for any forward-thinking organization. The reality is that features closely coupled with generative AI capabilities are still on the horizon. It will be at least a couple of years before they become commonplace. To navigate this transformative landscape effectively as an analytics professional, you must equip yourself with a deep understanding of generative AI. This proficiency will enable you to distinguish between features loosely coupled with generative AI and features that are natively and seamlessly integrated into the technology stack. Furthermore, keep a vigilant eye on the vendors supplying your critical business software. A vendor's stance and commitment to generative AI can profoundly impact how your organization operates in the future. 


LLM hype fades as enterprises embrace targeted AI models

LLMs were created by research teams exploring the capabilities of AI technology rather than as models designed to solve specific business problems. As a result, their capabilities are broad and shallow — writing fairly generic emails or press releases, for example. For the modern business, they have limited capabilities beyond that, requiring more data to produce results with any depth. While the AI landscape used to be dominated solely by OpenAI, major names in the tech world are beginning to outperform ChatGPT with their own LLMs, including Google’s new Gemini model. However, due to the broad capabilities of these new large language models, the text and image-based benchmarks used to determine a model’s prowess were just as general. These benchmarks ranged from simple multi-step reasoning to basic arithmetic. If an AI company’s gauge for a successful Generative AI platform is how correctly it can complete rudimentary math equations, that has little to no relevance for the work of an enterprise organization.



Quote for the day:

"Before you are a leader, success is all about growing yourself when you become a leader, success is all about growing others." -- Jack Welch

Daily Tech Digest - January 11, 2024

Four Ways the Evolution of AI Is Changing the Corporate Governance Landscape

There is no doubt that AI has been touted as the long-awaited answer to everyone’s productivity and efficiency woes. Tools like ChatGPT can do everything from generating interview questions to writing a song. They can create pictures, deliver data, and solve complex problems. Yet AI is not without its issues, and some believe that the most pressing dangers associated with this technology have not even begun to emerge. AI giants have been very clear that society must pay close attention to AI development. It’s crucial for directors and investors alike to understand that while science fiction movies seem like they belong in a fantasy realm, the reality they depict may not be as far-fetched as it seems. Similarly, scientists cannot take for granted that a bent toward corporate profit won’t motivate boards to push AI developers in that same direction. Instead of attempting to battle the behemoth of monetary thirst, it may be a better idea to come up with creative ways to make social goals and AI safety profitable. If developers can’t overcome the opposing viewpoint, why not try to find a way to join them?


The Incident Lifecycle: How a Culture of Resilience Can Help You Accomplish Your Goals

There are three points within the incident lifecycle where we can focus time and energy to improve the learning cycle and gain some bandwidth to improve resilience in the system. It’s not easy, because you’ll generally have to make small adjustments and changes along the way. CTOs won’t generally approve $100,000 for cross-incident analysis (that won’t be a marketable improvement to stakeholders) without evidence that it’s helpful. ... You need perspectives from across the organization. The discussion shouldn’t include only the incident manager and the person who pushed the bad code. I find that folks in marketing, product management, and especially customer support have great insights into the impact of an incident. When you meet, make sure it's an open conversation – the person facilitating should be talking less than anyone else in the room. This way, you will capture how this incident affected different groups. You may learn, for example, that the on-call engineer lacked dashboard access or customer support got slammed with complaints.


Nurturing Leadership Through The Power Of Reading

The most straightforward yet impactful way reading can contribute to self-development is through gaining knowledge. Whether extracting insights from books, articles or research papers, immersing oneself in written content is a foundational pillar of continuous development. This direct approach is not just about gathering information; it's also about internalizing concepts and lessons to create a reservoir of intellectual wealth for informed decision-making and sustained professional evolution. The simple power of reading remains a reliable means of absorbing knowledge—a timeless practice that can help propel individuals toward continuous growth and success. ... Reading also facilitates internal exploration. Self-help and philosophical literature invite introspection, which can nurture profound self-awareness. Atomic Habits by James Clear, for example, provides actionable insights for leaders seeking to enhance their habits and maximize their potential, fostering a deeper understanding of personal strengths and weaknesses. 


CI Is Not CD

A crucial difference I’ve often observed is that CI and CD tools have different audiences. While developers are often active on both sides of CI/CD, CD tools are frequently used by a wider group of people. ... CD tools have a range of subtle features that make it easier to handle deployment scenarios. They have a way to manage environments and infrastructure. This mechanism applies the correct configuration for each deployment and provides a way to handle deployments at scale, such as managing tenant-specific infrastructure or deployments to different locations (such as retail stores, hospitals or cloud regions). Alongside practical deployment features, CD tools also make the state of deployments visible to everyone who needs to know what software versions are where. This removes the need for people to ask for status updates, just as your task board handles work items. If you want to know your bank balance, you don’t want to phone your bank; you want to self-serve the answer instantly. The same is true for your deployments.


Managing CEO expectations is this year’s Priority No. 1

Today’s CEOs are more likely to get their IT visions from stories by credulous writers in the online business media. That’s if we’re lucky. If we aren’t, they’ll want Tony Stark’s ability to conjure up high-tech solutions by gesticulating into a 3D touch interface while arguing with the AI that ran Iron Man’s lab. That leaves it up to you, your company’s hard-working CIO, to temper the CEO’s expectations from what they infer from the Marvel Cinematic Universe to Earth 2024. Because CEOs’ real reality (“real” by definition) is likely to be disappointing compared to the MCU and other semi-fictional realities they see, hear of, or imagine, CIOs can worry a little less about how IT might disappoint them on this score. ... Okay, fair’s fair and fun’s fun. But few CEOs will be completely consumed by these semi-whimsical depictions of information technology’s future. They’ll continue to have practical concerns, too, like where all the money is that cloud computing was supposed to save them. Some disappointments, that is, are both evergreen and rooted in real reality. 


Embracing offensive cybersecurity tactics for defense against dynamic threats

The essence of a coalition approach in offensive cyber operations is straightforward: combining forces to enhance cyber defense capabilities. This approach is critical in today’s world, where cyber threats transcend national borders. By pooling resources, knowledge, and intelligence, a coalition approach facilitates a more comprehensive and effective response to cyber threats. In the financial industry for example we have FS-ISAC that supports all these. Effective implementation involves establishing clear communication channels, defining shared objectives, and ensuring mutual trust among participating entities. ... Looking ahead, the line between offense and defense in cybersecurity is blurring. The future I envision is one where these two are not distinct entities but different aspects of a singular, holistic strategy. Offensive tools will be used not just to attack but to inform, to scout for threats and act before they materialize. This integrated approach is akin to a martial artist’s stance, ready to block and strike simultaneously.


CES 2024: Will the Coolest New AI Gadgets Protect Your Privacy?

As Tschider points out, "COPPA doesn’t have any cybersecurity requirements to actually reinforce its privacy obligations. This issue is only magnified in contemporary AI-enabled IoT because compromising a large number of devices simultaneously only requires pwning the cloud or the AI model driving function of hundreds or thousands of devices. Many products don't have the kind of robust protections they actually need." She adds, "Additionally, it relies primarily on a consent model. Because most consumers don't read privacy notices (and it would take well over a hundred days a year to read every privacy notice presented to you), this model is not really ideal." For Tschider, a superior legal framework for consumer electronics might take bits of inspiration from HIPAA, or New York State's cybersecurity law for financial services. But really, one need only look across the water for an off-the-shelf model of how to do it right. "For cybersecurity, the NIS 2 Directive out of the EU is broadly useful," Tschider says, adding that "there are many good takeaways both from the General Data Protection Regulation and the AI Act in the EU."


Critical Components for Data Fabric Success

In a physical data fabric, users access data, run analytics on it, or use APIs at a consumption layer to deliver the data wherever it is needed. Prior to that, data is modeled, prepared, and curated in the discovery layer, and transformed and/or cleansed as needed in the orchestration layer. In the ingestion layer, data is drawn from one or more data sources (which can be on premises or in the cloud) and stored in the persistence layer, which is usually a data lake or data warehouse. Logical data fabrics integrate data using data virtualization to establish a single, trusted source of data regardless of where the data is physically stored. This enables organizations to integrate, manage, and deliver distributed data to any user in real time regardless of the location, format, and latency of the source data. Unlike a logical data fabric, a physical data fabric requires the ability to physically centralize all the required data from multiple sources before it can deliver the data to consumers. Data must also be physically transformed and replicated each time, and adapted to each new use case. 
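The logical-versus-physical distinction above can be sketched in a few lines of Python. This is a toy illustration with hypothetical data and function names, not any vendor's API: the logical fabric federates a query across sources at read time with no central copy, while the physical fabric first replicates and transforms the data into a central store (the persistence/orchestration layers) before serving consumers.

```python
# Two "remote" sources standing in for, say, an on-prem CRM and a cloud order system.
CRM = [{"customer_id": 1, "name": "Ada"}, {"customer_id": 2, "name": "Grace"}]
ORDERS = [{"customer_id": 1, "amount": 100},
          {"customer_id": 1, "amount": 50},
          {"customer_id": 2, "amount": 75}]

def logical_view():
    """Logical fabric: virtualize -- join the sources on demand, no copy made."""
    totals = {}
    for o in ORDERS:
        totals[o["customer_id"]] = totals.get(o["customer_id"], 0) + o["amount"]
    return [{**c, "total_spend": totals.get(c["customer_id"], 0)} for c in CRM]

def physical_load():
    """Physical fabric: ingest and transform into a central replica first,
    then answer consumers from that replica."""
    lake = {"crm": list(CRM), "orders": list(ORDERS)}  # persistence layer: copy
    totals = {}
    for o in lake["orders"]:                           # orchestration: transform
        totals[o["customer_id"]] = totals.get(o["customer_id"], 0) + o["amount"]
    return [{**c, "total_spend": totals.get(c["customer_id"], 0)}
            for c in lake["crm"]]                      # consumption layer

print(logical_view())  # same answer either way; the difference is where data moves
```

Both paths return identical results; the physical fabric's cost is that the copy and transform steps must be repeated for each new use case, which is exactly the trade-off the excerpt describes.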


Boost Your Business With Digital Twin Technology

Digital twins allow businesses to answer questions that can directly impact strategic and operational decisions. “Organizations can move from answering simple questions about asset performance to understanding how these assets -- machines, assembly lines, supply chains -- will operate in the future, and what actions the business can take to meet performance and uptime goals,” Mann explains. Manufacturers are the businesses most likely to gain value from digital twin technology. “Manufacturers look to understand the causes of downtime, model scenarios to improve efficiency, and reduce waste,” says Devin Yaung, senior vice president, group enterprise, IoT products and services, at technology and business solutions provider NTT, in an email interview. Digital twins of individual machines permit instant views into maintenance issues and potential failures. “The growth of connected IoT sensors and devices has allowed all industries to gain insights into assets,” Yaung says. “Because of this explosion of connectivity, we are seeing large adoption not only in manufacturing but also in utilities, mining, hospitals, ports, airports, logistics/transportation, agriculture, and many other industries.”
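The two capabilities Mann and Yaung describe, an instant view of current machine health plus forward-looking scenario modeling, can be sketched as a minimal digital twin. All names, thresholds, and the linear trend model here are hypothetical simplifications for illustration; real twins use richer physics or ML models fed by many sensors.

```python
class MachineTwin:
    """Toy digital twin of one machine: mirrors IoT sensor readings and
    answers 'how is it now?' and 'where is it headed?' questions."""

    def __init__(self, temp_limit_c=90.0):
        self.temp_limit_c = temp_limit_c  # hypothetical safe operating limit
        self.readings = []                # mirrored sensor state

    def ingest(self, temp_c):
        """Mirror a temperature reading streamed from the physical machine."""
        self.readings.append(temp_c)

    def needs_maintenance(self):
        """Instant view: is the machine currently over its safe limit?"""
        return bool(self.readings) and self.readings[-1] > self.temp_limit_c

    def project(self, intervals):
        """Crude scenario model: linearly extrapolate the latest warming trend."""
        if len(self.readings) < 2:
            return self.readings[-1] if self.readings else None
        trend = self.readings[-1] - self.readings[-2]  # degrees per interval
        return self.readings[-1] + trend * intervals

twin = MachineTwin()
for t in (70.0, 74.0, 78.0):   # readings arrive from connected sensors
    twin.ingest(t)
print(twin.needs_maintenance())  # currently under the limit
print(twin.project(4))           # but trending toward it: 78 + 4*4 = 94.0
```

The point of the sketch is the pattern, not the math: the twin holds a live mirror of asset state (enabling the "instant views into maintenance issues") and a model over that state (enabling the "model scenarios" use case the excerpt attributes to manufacturers).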


Hey Gen. Z, you’re looking for tech jobs in all the wrong places

The pace of digital adoption and technological change today is far greater than it's ever been, according to Ger Doyle, senior vice president of US-based IT staffing firm Experis. The rise of AI and genAI is likely to accelerate that trend, “so new graduates, as well as those in the workforce today, need to embrace a concept of life-long learning to stay relevant in the new world,” Doyle said. Pandor agreed: “Candidates should remain consistently curious throughout the job-searching process. Keeping up to date with the latest trends and developments in the digital world by reading technical news enables them to showcase their interest in the ever-changing sector when they do land a job interview. From a more practical perspective, talent can also continue to practice and enhance their technical skills while job hunting so that they are ready to hit the ground running.” Younger job candidates might not be aware of the breadth and diversity of roles available, Pandor said, and they shouldn’t rule out other opportunities early in their careers.



Quote for the day:

“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick