Daily Tech Digest - November 16, 2022

Should we measure developer productivity?

If we concede that it is possible to measure developer productivity (a proposition that I am not completely sold on), we then must ask whether we should do that. The desire to do so is certainly strong. Managers want to know who their best developers are, and they want metrics that will help them at performance evaluation time. HR wants to be able to document performance issues. CEOs want to know that the money they are spending is being used effectively. Even if you use new tools to measure individual developer productivity, those metrics will likely be gamed. Lines of code is considered a joke metric these days. “You want lines of code? I’ll give you lines of code!” Is number of commits per day or average time to first PR comment any different? If you measure individual developers on these metrics, they will most definitely improve them. But at what cost? Likely at the cost of team productivity. An old CEO of mine used to say that software development is a team sport. If individual developers are measured against each other on any metric, they will start competing with each other, especially if money and promotions are on the line.


Technology spending will rise next year. And this old favourite is still a top priority

White says huge macro-economic pressures around the globe are causing senior executives to think much more carefully about how to get close to customers, to boost growth, and to potentially take cost out of the business. She also refers to pressures on supply chains. Executives have seen the disruptions caused first by the pandemic and then Russia's invasion of Ukraine, and are now looking for tools to respond flexibly to fluctuations in supply and demand. The solutions to many of these challenges, says White, are likely to come via technology. And for many businesses, the starting point for that response is going to be a continued investment in cloud computing. This focus on on-demand IT might seem surprising. After a decade or more on the IT agenda, and a couple of years of targeted investment due to the pandemic, you'd be forgiven for assuming that a shift to cloud computing was yesterday's news. ... However, the Nash Squared survey shows that interest in the cloud is still very much today's priority. "It's still growing and evolving as a market, with a quite young set of technologies and capabilities," says White.


A modern approach to enterprise software development

For all the potential benefits that low-code tools offer in terms of enabling people in the business to develop their own software to improve the efficiency of the business processes with which they interact, the industry is recognising the massive risk that this poses. Dyson’s Wilmot said the business has concentrated on operational excellence focused on project audits, adding that people and the process around low-code development are crucial. He suggested that CIOs should decide: “Who will be your core low-code coders in IT and in the business?” Wilmot also urged CIOs considering the idea of opening up low-code development to business users who would like to code, to ensure that processes are in place to prevent the code they develop from “running wild”. Clearly there are numerous opportunities to improve on how things work, especially in organisations that have grown organically over time, where, to achieve a business objective, employees need to use numerous systems that don’t talk to each other. More often than not, data has to be rekeyed, which is both error-prone and labour-intensive.


Three Ingredients of Innovative Data Governance

The first important feature of innovative data governance is providing a data set that is statistically similar to the real data set without exposing private or confidential data. This can be accomplished using synthetic data. Synthetic data is created using real data to seed a process that can then generate data that appears real but is not. Variational autoencoders (VAEs), generative adversarial networks (GANs), and real-world simulation create data that can provide a basis for experimentation without leaking real data and exposing the organization to untenable risk. VAEs are neural networks composed of encoders and decoders. During the encoding process, the data is transformed in such a way that its feature set is compressed. During this compression, features are transformed and combined, removing the details of the original data. During the decoding process, the compression of the feature set is reversed, resulting in a data set that is like the original data but different. The purpose of this process is to identify a set of encoders and decoders that generate output data that is not directly attributable to the initial data source.
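The encode-compress-decode loop described above can be sketched in a few lines. This is a toy illustration only: the weights below are random stand-ins for a trained network, and every name and dimension is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 input features compressed to a 2-D latent space.
INPUT_DIM, LATENT_DIM = 8, 2

# Randomly initialised weights stand in for a trained encoder/decoder.
W_enc = rng.normal(size=(INPUT_DIM, LATENT_DIM * 2))  # produces mean and log-variance
W_dec = rng.normal(size=(LATENT_DIM, INPUT_DIM))

def encode(x):
    """Compress x into the parameters of a latent Gaussian (mean, log-variance)."""
    h = x @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def sample(mean, log_var):
    """Reparameterisation trick: draw a latent point near the encoded mean."""
    return mean + np.exp(0.5 * log_var) * rng.normal(size=mean.shape)

def decode(z):
    """Expand the latent point back out to the original feature space."""
    return z @ W_dec

real_record = rng.normal(size=INPUT_DIM)
mean, log_var = encode(real_record)
synthetic_record = decode(sample(mean, log_var))

# The synthetic record lives in the same feature space as the real one,
# but is not a copy of it -- details were lost in the compression step.
assert synthetic_record.shape == real_record.shape
assert not np.allclose(synthetic_record, real_record)
```

In a real VAE the encoder and decoder are trained so that decoded samples are statistically similar to the source data, which is what makes the output usable as a privacy-preserving stand-in.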


What’s Holding Up Progress in Machine Learning and AI? It’s the Data, Stupid

While companies are having some success in putting machine learning and AI into production, they would be further along if data management issues weren’t getting in the way, according to Capital One’s new report, “Operationalizing Machine Learning Achieves Key Business Outcomes,” which was released today. ... “There’s a real appetite to scale that thing quickly,” he says. “And if you don’t step back and say, hey, the thing you stood up in the sandbox, let’s actually make sure that you’re systematizing it, making it widely available, putting metadata on top of it, putting traceability and flows, and doing sort of all the foundational scaffolding and infrastructure steps that are needed for this thing to be sustainable and reusable. “That requires a ton of discipline and hygiene and potentially waiting a bit before the thing that you want to scale up starts to see impact in the marketplace,” Kang continues. “The temptation is always there. So what ends up happening, through no ill intent, is these proof of concepts start to see impact, and then all of a sudden you find yourself in a place where there’s a bunch of data silos and a bunch of other data engineering infrastructure challenges.”


Twitter's CISO Takes Off, Leaving Security an Open Question

"Twitter made huge strides towards a more rational internal security model and backsliding will put them in trouble with the FTC, SEC, 27 EU DPAs and a variety of other regulators," he said — ironically, in a tweet. "There is a serious risk of a breach with drastically reduced staff." Many others also view the cuts and the exodus of senior executives — both voluntarily and involuntarily — as severely crippling the social media giant's capabilities, especially in critical areas such as security, privacy, spam, fake accounts, and content moderation. "These are huge losses to Twitter," says Richard Stiennon, chief research analyst at IT-Harvest. "Finding qualified replacements will be extremely expensive." Kissner's exit is sure to add to what many view as a deepening crisis at Twitter following Musk's takeover. Among those who have been axed previously are CEO Parag Agrawal, chief financial officer Ned Segal, legal chief Vijaya Gadde, and general counsel Sean Edgett. Teams affected by Musk's layoffs reportedly include engineering, product teams, and those responsible for content creation, machine learning ethics, and human rights.


How to prepare for ransomware

We know that bad actors are motivated by financial gains, and we are starting to see evidence that they are mining the exfiltrated data for additional sources of potential revenue. For many years, the cyber security community has been saying it’s not a case of “if” you’ll be attacked, but “when”. That being the case, it is important to examine all these phases and make sure that adequate time and effort is allocated to preparing to defend against and prevent an incident, while also conducting the requisite detection, response and recovery activities. IT security leaders should work under the assumption that a ransomware attack will be successful, and ensure that the organisation is prepared to detect it as early as possible and recover as quickly as possible. The ability to quickly detect and contain a ransomware attack will have the biggest impact on any outage or disruption that is caused. The first and most common question is: should the ransom be paid? Ultimately, this has to be a business decision. It needs to be made at an executive or board level, with legal advice.


The unimon, a new qubit to boost quantum computers for useful applications

To experimentally demonstrate the unimon, the scientists designed and fabricated chips, each of which consisted of three unimon qubits. They used niobium as the superconducting material apart from the Josephson junctions, in which the superconducting leads were fabricated using aluminum. The team measured the unimon qubit to have a relatively high anharmonicity while requiring only a single Josephson junction without any superinductors, and bearing protection against noise. The geometric inductance of the unimon has the potential for higher predictability and yield than the junction-array-based superinductors in conventional fluxonium or quarton qubits. "Unimons are so simple and yet have many advantages over transmons. The fact that the very first unimon ever made worked this well gives plenty of room for optimization and major breakthroughs. As next steps, we should optimize the design for even higher noise protection and demonstrate two-qubit gates," added Prof. Möttönen.


Data privacy: why consent does not equal compliance

A serious blind spot for brands is caused by consent models. Many organisations assume that obtaining consent from users to collect and process their data ensures compliance. In reality, consent does not equal compliance. Many brands operate under an illusion of compliance, when, in fact, they are routinely leaking personal data across their media supply chain and tolerating the unlawful collection and sharing of data by unauthorised third parties. Research from Compliant reveals that there are a number of ways in which brands are inadvertently putting themselves at risk. For example, our analysis shows that of the 91 per cent of the EU advertisers using a Consent Management Platform (CMP), 88 per cent are passing user data to third-parties before receiving consent to do so. While a properly implemented CMP is a useful tool for securing consent, integrating these platforms with legacy technologies and enterprise architectures is clearly a problem. Another risk stems from “piggybacking”, where unauthorised cookies and tags collect data from brand websites without the advertiser’s permission.


Machine learning: 4 adoption challenges and how to beat them

Machine learning algorithms may still behave unpredictably even after they have been trained for data analysis. This lack of clarity might be an issue when leveraging AI in decision-making leads to unexpected outcomes. As the Harvard Business School reported in its 2021 Hidden Workers: Untapped Talent report, ML-based automated hiring software rejected many applicants due to overly rigid selection criteria. That’s why ML-based analysis should always be complemented with ongoing human supervision. Talented experts should monitor your ML system’s operation on the ground and fine-tune its parameters with additional training datasets that cover emerging trends or scenarios. Decision-making should be ML-driven, not ML-imposed. The system's recommendation must be carefully assessed and not accepted at face value. Unfortunately, combining algorithms and human expertise remains challenging due to the lack of ML professionals in the job market.
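One way to keep decision-making "ML-driven, not ML-imposed" is to route low-confidence recommendations to a human reviewer rather than acting on them automatically. A minimal sketch of that idea, with hypothetical score and threshold values:

```python
def route_decision(candidate_score, confidence, confidence_floor=0.85):
    """ML-driven, not ML-imposed: the model's recommendation is only
    applied automatically when its confidence clears a floor; anything
    below that floor is routed to a human reviewer instead."""
    if confidence >= confidence_floor:
        return "auto-accept" if candidate_score >= 0.5 else "auto-reject"
    return "human-review"

assert route_decision(0.9, confidence=0.95) == "auto-accept"
assert route_decision(0.2, confidence=0.95) == "auto-reject"
assert route_decision(0.9, confidence=0.60) == "human-review"
```

In practice the threshold would be tuned against the cost of a wrong automatic decision, and the human-review queue is where the experts mentioned above can catch the overly rigid criteria the Harvard report describes.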



Quote for the day:

"Good leaders must first become good servants." -- Robert Greenleaf

Daily Tech Digest - November 15, 2022

The Chief Trust Officer Role Can Be the Next Career Step for CISOs

Many CISOs are already unofficially doing the work that comes with the CTrO role, according to Pollard. They are doing customer-facing work, navigating third-party risk management, and focusing on enterprise resilience. “CISOs that spend more time on customer-facing activity, they are at companies that grow faster,” Pollard asserted. “Cybersecurity touches revenue, and security leaders that are able to carve out the time to focus on customer activity help drive hyper growth.” CISOs who are driving growth for their companies are playing an important part on the leadership team, and if they’ve been in the role for a long enough time, it could be time to ask the question “What comes next?” CISOs who have been in their position for 48 months are due for a title-level promotion, according to Pollard. And CTrO is that next step. ... Through his research, Pollard is seeing the CTrO role filled at a number of organizations. Cisco has a chief trust officer. So does SAP. “We're not talking about small, innovative startups. We're talking about goliath businesses that recognize the importance of trust in what they do,” Pollard said.


How regulation of the metaverse could impact your business

The regulatory challenges faced by Web3 are currently much fresher, arguably more nuanced and in some cases, urgent. It cannot be regulated as a single entity, as its multitude of use cases demand a multitude of approaches. Specific rules governing the security and availability of systems, finance, archives, identity and IP rights will need to be set. The good news is that policymakers could leverage Web3’s benefits to impose regulation. As it’s based on decentralisation and automation, it’s not far-fetched to imagine the technology being used to enforce and automate taxation, for example. Currently, Web3 platforms like cryptocurrency exchanges or NFT marketplaces aren’t standardised, with inconsistent UX and language used to communicate concepts. Often, these platforms have little or no duty to educate about safety or establish protections, and while platforms like Coinbase and OpenSea do a good job here, it’s far from the norm and scams are still commonplace owing to lack of understanding.


Private 5G drives sustainable and agile industrial operations

Looking at business outcomes such as sustainability and agility, the partners regard industrial private 5G as an enabler of digital transformation in smart manufacturing to help deliver connected worker applications, mobile asset applications and untethered fixed industrial asset applications. The former are seen as able to increase visibility and intelligence through mobile digital tools, such as analytics, digital twins and augmented reality (AR), while mobile asset applications increase agility and efficiency with autonomous vehicles, such as automated guided vehicles (AGVs) and autonomous mobile robots (AMRs). The consortium’s tests were run according to an established test plan provided by Rockwell Automation with success criteria of zero faults. It outlined a series of test cases to establish reliable Ethernet/IP standard and safety (CIP Safety) I/O connections from a GuardLogix area controller, with a range of requested packet interval (RPI) settings – the rate at which the controller and the I/O exchange data – over the 5G RAN to the FLEX 5000 standard and safety I/O.


Who Moved My Code? An Anatomy of Code Obfuscation

The best security experts will tell you that there’s never an easy, or a single solution to protect your intellectual property, and combined measures, protection layers and methods are always required to establish a good protective shield. In this article, we focus on one small layer in source code protection: code obfuscation. Though it’s a powerful security method, obfuscation is often neglected, or at least misunderstood. When we obfuscate, our code becomes unintelligible, thus preventing unauthorized parties from easily decompiling, or disassembling it. Obfuscation makes our code impossible (or nearly impossible) for humans to read or parse. Obfuscation is, therefore, a good safeguarding measure used to preserve the proprietary nature of the source code and protect our intellectual property. To better explain the concept of obfuscation, let’s take “Where’s Waldo” as an example. Waldo is a known illustrated character, always wearing his red and white stripy shirt and hat, as well as black-framed glasses.


Should security systems be the network?

The appeal and real benefits of having the security systems be the whole network are clearest for smaller and midsized companies. They are more likely to have uniform and relatively simple needs, and also to have thinner staffing. They are more likely to have difficulty affording, attracting, and retaining the talent they need in both security and networking. So, having just one platform to become expert in, one platform to train new staff on or to outsource the management of lets them make the most of the staff they have. The benefits are less clear for larger companies. These tend to have more complex environments and requirements, and are less likely to tolerate the risks of monoculture given they are better able to staff for and support a blended ecosystem. So, should security systems be the network? For smaller organizations, it looks viable with the caveats outlined above. For most larger organizations, I think the answer is currently no. Instead, they should focus on making their network systems a bigger part of the security infrastructure.


Democratization Is The Key To Upskill At Work And Improve ROI

Creating actionable data and analytics programs to educate employees is one of the most effective ways to bridge the skills gap. We have seen successes with executive-sponsored datathons or when companies gamify their learning experience. We also think it’s important for technical data experts to act as mentors to knowledge workers with domain expertise and guide them through the analytics process. We believe this collaboration between technical experts and domain experts will help organizations achieve breakthroughs with their data faster. Finally, analytics needs to be easy, not complex. Organizations should invest in technologies that move away from being highly dependent on writing code. ... Data and analytics generate ROI in many ways. First are the time savings. Organizations that shift from spreadsheet-based processes save several hours per week, sometimes up to a third of their time per worker – multiply this by all the domain experts and knowledge workers still stuck in spreadsheets and you’ve got some serious time savings. This is just the tip of the iceberg.


Top cybersecurity threats for 2023

Disgruntled employees can sabotage networks or make off with intellectual property and proprietary information, and employees who practice poor security habits can inadvertently share passwords and leave equipment unprotected. This is why there has been an uptick in the number of companies that use social engineering audits to check how well employee security policies and procedures are working. In 2023, social engineering audits will continue to be used so IT can check the robustness of its workforce security policies and practices. ... Cases of data poisoning in AI systems have started to appear. In a data poisoning attack, a malicious actor finds a way to inject corrupted data into an AI system that will skew the results of an AI inquiry, potentially returning a false result to company decision makers. Data poisoning is a new attack vector into corporate systems. One way to protect against it is to continuously monitor your AI results. If you suddenly see a system trending significantly away from what it has revealed in the past, it’s time to look at the integrity of the data.
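The monitoring advice above — watching for a system trending significantly away from its past behaviour — can start as something as simple as a z-score check on the model's recent outputs. A minimal sketch, with hypothetical score values and threshold:

```python
from statistics import mean, stdev

def output_drifted(history, recent, z_threshold=3.0):
    """Flag when the recent average of a model's output sits far from
    its historical baseline -- a cheap first check that something
    (possibly poisoned training data) has skewed the system."""
    baseline_mean = mean(history)
    baseline_sd = stdev(history)
    z = abs(mean(recent) - baseline_mean) / baseline_sd
    return z > z_threshold

# Scores the model has produced in the past (hypothetical values).
history = [0.70, 0.72, 0.69, 0.71, 0.70, 0.73, 0.68, 0.71]

assert not output_drifted(history, [0.70, 0.72, 0.71])  # business as usual
assert output_drifted(history, [0.95, 0.97, 0.96])      # sudden trend shift
```

A drift flag is not proof of poisoning — legitimate shifts in the input data trip it too — but it tells you when it is time to audit the integrity of the data, as the article recommends.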


Corporate execs confident on sustainability goals, admit more work needed

Efforts to achieve sustainability goals can broadly be grouped into several areas: green resources procurement, which includes sustainable energy and water; operational efficiency, which includes the IT value chain, supply chain and other scope 3 emission sources that make up 40% of all greenhouse gas emissions; and end of lifecycle, including circular economy or recycling products to create new ones. For example, data centers and cloud industries tend to focus on green energy procurement (since they use a lot of energy to power data centers) as well as operational efficiency to reduce power usage, according to Abhijit Sunil, a senior analyst with Forrester Research. “Standards are certainly evolving, and more and more organizations are held accountable for their commitments and how they take action towards it,” Sunil said. For example, Sunil noted, government scrutiny will continue to increase, holding more “greenwashers” accountable. Greenwashers are companies that deceptively purport that their products, aims and policies are environmentally friendly.


The office of 2023: Top workforce trends that will shape the year ahead

Roderick believes an overarching theme for the workplace in 2023 will be adjusting how employees work remotely. He says there could be an uptick in surveillance for remote workers that will allow managers to observe productivity, and executives could enforce return-to-office mandates as a reaction to a slowdown in business. ... "The world of work has been through huge changes since the pandemic, and it would be good not to see the positives of this change undone by a recession." Silverglate believes that technology, office redesign, and sustainability will all propel hybrid and remote working in 2023. Video conferencing became a staple in work-from-home practices, but VR is emerging to make the experience more immersive and productive. "When many are in person and a team member needs to be virtual, VR technology can truly reduce the perceived gap between the two, which is one of the largest complaints I've heard about the challenges of traditional video-conferencing technology as it relates to hybrid teams," he says.


From Async Code Reviews to Co-Creation Patterns

The way it goes is that once a developer thinks they are done with coding, they invite other team members to review their work. This is nowadays typically done by raising a Pull Request and inviting others for a review. But, because reviewers are busy with their own work items and a plethora of other things happening in the team, they are not able to react immediately. So, while the author is waiting for a review, they also want to feel productive, thus they start working on something else instead of twiddling their thumbs and waiting for a review. Eventually, when reviewer(s) become available and provide feedback on the PR and/or ask for changes, the author of the PR is then not available because they are busy with something else. This delayed ping-pong communication can extend over several days or weeks and a couple of iterations, until the author and reviewer(s) converge on a solution they are both satisfied with and which gets merged into the main branch.



Quote for the day:

"How was your day? If your answer was 'fine,' then I don't think you were leading" -- Seth Godin

Daily Tech Digest - November 13, 2022

Cybersecurity leaders want to quit. Here's what is pushing them to leave

Almost a third of chief information security officers (CISOs) and IT security managers in the UK and US are considering leaving their current organization, according to new research. Not only that, but a third are planning to quit their jobs within the next six months. ... many IT security leaders are struggling to keep up with evolving threats and new cybersecurity practices, while also reporting issues around recruitment, retention and work-life balance that are prompting many to turn away from the industry. When asked about the aspect of their role that they disliked most, 30% cited the lack of a work-life balance, with 27% saying that much time was spent on 'firefighting' rather than addressing strategic business issues. On top of the 32% of CISOs planning a departure due to the stresses of the job, 52% admitted that they are struggling to keep up to date with new frameworks and models such as Zero Trust, while a further 20% felt that having the right skills on their team was "a serious challenge".


Why Is Optimism a Critical Security Skill?

There’s a different way to think about the practice of security: as a vision- and mission-based endeavor. When security practitioners log in each day to start work, they are protecting people they care about: their colleagues, partners and customers. They’re also safeguarding their organizations’ ability to do business in a complex world by delivering vital products and services that others need and ensuring society functions as intended. As a result, security teams are creating a better world for everyone. For employees in organizations, these connections may either be explicit or implied. Security professionals who protect national infrastructure for a government agency, a nonprofit’s ability to deliver aid or an e-commerce firm’s ability to deliver goods will likely see the value in safeguarding their organizations’ business and operations. Yet, countless others provide processes or services that enable the effective functioning of businesses and community life. These professionals, too, should take pride in fulfilling their organization’s vision and mission.


Networking and Data Center Horrors That Scare IT Managers

Carrie Goetz, D.MCO and Principal/CTO at StrategITcom, LLC, and a frequent speaker at Network Computing events, offered up some spooky incidents she has encountered over the years. “There was the case of cleaning people plugging vacuum cleaners into UPS outlets when cleaning the data center and shutting them down. Happened every night sometime between 2 and 4 AM. The only way we caught it was to sit up there at that time.” “Or how about doing an audit of the gear in a data center? The customer thought they had about 2,600 servers, and we found over 3,000 physical machines. Some had not passed a bit of traffic in years.” Talk about a nightmare. She noted, “decommissioning was not in their vocabulary until after the audit.” Another example should send chills down any IT manager’s spine. “We took over a contract for a prison health care provider. They had previously hired another company. When all of the deliveries were late, the customer started investigating and found out that the company was staging their servers in a shed with a dirt floor and no AC running. They kept going up and down, and two failed for dirt and moisture.”


Hervé Tessler – ‘Cyberattacks can mean total reputational death’

When I joined the business 33 years ago, nobody ever talked about cybersecurity. I don’t recall the word. Everything was physical: what if somebody’s gotten into my flat or I’m afraid someone’s going to take my car, attack me in the street to take my watch or my wallet or whatever. Obviously, the world has become much more digital. All these fears and threats became digital. What I would say over the past few years is that we’ve seen a massive amplification of risk. Large companies have a board and an IT group under the board looking at cybersecurity. They take it very seriously. They are scared to death of any brand damage. They are relatively focused, which is not the same as being well equipped. What I’ve found out over the last six months is that more and more small and mid-sized businesses are paying a lot more attention to cyber. When I open the newspaper, there’s not a day without a small story about a major cyber negative impact on a business. There was a recent cyberattack on a French hospital that sent them back to the Middle Ages – it lasted for months. Of course, SMBs taking cybersecurity seriously is an opportunity for us to help them with this threat.


Cyber criminals have World Cup Qatar 2022 in their sights

The Digital Shadows Photon research team have been tracking cyber threats coalescing around the World Cup over the past 90 days using a specially created alert system. They have found that broadly, threats to the event can be arranged into four categories – brand protection, cyber threat, physical protection and data leakages. Of these, most of the observed activity relates to the cyber threat category. “Scams could present themselves in many forms,” the Photon team wrote in a newly published online advisory. “For instance, financially motivated threat actors often plant malicious URLs spoofing these events that lead to fraudulent sites, hoping to maximise their chances of scamming naive internet users for a quick, illicit, profit. “At the same time, hacktivist groups may exploit the public attention given to such events to exponentially increase the reach of their message. State-sponsored advanced persistent threat (APT) groups may also decide to target global sporting events to achieve state goals against the hosting country or the broader event community.”


Agile or V-Shaped: What Should Be Your Next Software Development Life Cycle Model?

The agile model is known for its flexibility and responsiveness to change. This makes it ideal for projects that are constantly evolving or that require quick turnarounds. However, this flexibility can also be a downside, as it can lead to scope creep and unrealistic expectations. The V-shaped model is more rigid and structured, but this can also be seen as a strength. This model helps to prevent scope creep by clearly defining the deliverables at each stage of the project. It also provides more structure and transparency, which can help to keep stakeholders informed and on track. However, the downside of this model is that it can be inflexible and resistant to change. So, which model is best for your project? Ultimately, it depends on your specific needs and objectives. The Agile software development life cycle model is a great choice for small to medium-sized projects. This is because it offers flexibility and adaptability, which are essential when working on smaller-scale projects. So if you need a flexible and responsive approach, then the agile model may be a better fit.


Veteran CIOs on leading IT today

It’s a different role as you shift from manager to director. It’s realizing your entire organization is essentially run by someone else tactically, so really backing off and letting them fail or succeed on their own. And then you have to figure out how to focus very differently on alignment, making sure all the leaders are on the same page, because now you have really good leaders and they’re all running in different paths. In IT especially, managers tend to want to hang on to some of the hands-on work after they become directors, and that’s often because they’re promoted due to skill, not leadership. So suddenly they find themselves at a director level and they’re just a really good engineer. They don’t have any other tool in their toolkit except doing it themselves. So it’s hard for them to let others own things completely. It’s really important to get a good mentor or someone in place to help them. Usually what you hear is the horror stories, where someone fails miserably and then they learn and pick themselves up and go again. We’ve got to find a way to prevent that.


IT security: 3 areas to prioritize for the rest of 2022

If companies fail to frequently audit access policies to ensure that external groups can only access the systems they need, this is another avenue that hackers can easily exploit. It’s also essential to immediately cut off access after parting ways with a consultant and periodically confirm that former contractors no longer have access. Ensure an established timeline for auditing access policies – and never allow it to slip. Addressing password hygiene is another critical consideration. According to the most recent Verizon Data Breach Investigations Report, over 80 percent of hacking incidents involved stolen credentials. And studies have repeatedly shown that at least 71 percent of people reuse passwords. If just one of the sites associated with a reused password has been breached, then all other accounts protected by that password are also at risk. With workforce management challenges on the horizon for 2023, it’s essential to implement policies and procedures addressing the inherent security vulnerabilities of the Great Resignation.
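The periodic audit the author describes — confirming that former contractors no longer have access — can be partly automated by comparing access records against contract end dates. A minimal sketch; the record format and account names here are hypothetical:

```python
from datetime import date

# Hypothetical access records for external parties: (account, contract end date).
access_records = [
    ("alice@vendor.example", date(2022, 9, 30)),
    ("bob@consultancy.example", date(2023, 3, 31)),
]

def stale_accounts(records, today):
    """Return external accounts whose contract has already ended but whose
    access was never revoked -- candidates for immediate removal."""
    return [account for account, end_date in records if end_date < today]

# On 15 November 2022, Alice's contract has lapsed but Bob's is still active.
assert stale_accounts(access_records, today=date(2022, 11, 15)) == ["alice@vendor.example"]
```

Running a check like this on a fixed schedule is one way to give the audit timeline the article insists on a mechanism that cannot quietly slip.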


Cookies for MFA Bypass Gain Traction Among Cyberattackers

Stealing session cookies has become one of the most common ways that attackers circumvent multifactor authentication. The Emotet malware, the Raccoon Stealer malware-as-a-service, and the RedLine Stealer keylogger all have functionality for stealing session tokens from the browsers installed on a victim's system. In August, security software firm Sophos noted that the popular red-teaming and attack tools Mimikatz, Metasploit Meterpreter, and Cobalt Strike all could be used to harvest cookies from the browsers' caches as well, which the firm called "the new perimeter bypass." "Cookies associated with authentication to Web services can be used by attackers in 'pass the cookie' attacks, attempting to masquerade as the legitimate user to whom the cookie was originally issued and gain access to Web services without a login challenge," Sean Gallagher, a threat researcher with Sophos, stated in the August blog post. "This is similar to 'pass the hash' attacks, which use locally stored authentication hashes to gain access to network resources without having to crack the passwords."
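To see why a stolen cookie is so valuable, consider how little an attacker has to do with one. The sketch below – the URL and token value are hypothetical – builds a request that replays a victim's session; the server sees a valid, already-authenticated session and issues no login or MFA challenge:

```python
import urllib.request

# Hypothetical session token, as it might be lifted from a victim's browser.
stolen = "session_id=deadbeef"

# Attaching the Cookie header replays the victim's authenticated session --
# no password, no MFA prompt.
req = urllib.request.Request("https://app.example.com/account")
req.add_header("Cookie", stolen)
print(req.get_header("Cookie"))  # session_id=deadbeef
```

This is also why short session lifetimes and re-authentication for sensitive actions matter: they shrink the window in which a replayed cookie works.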


Scrutinising AI requires holistic, end-to-end system audits

To combat the lack of internal knowledge around how AI systems are developed, the auditing experts agreed on the pressing need for a standardised methodology for how to conduct a socio-technical audit. They added that while a standardised methodology does not currently exist, it should include practical steps to take at each stage of the auditing process, but not be so prescriptive that it fails to account for the highly contextual nature of AI. However, digital rights academic Michael Veale said standardisation is a tricky process when it comes to answering inherently social questions. “A very worrying trend right now is that legislators such as the European Commission are pushing value-laden choices around fundamental rights into SDOs [standards development organisations],” he said ... Another risk of prescriptive standardisation, according to Brown, is that the process descends into a glorified box-ticking exercise. “There’s a danger that interrogation stops and that we lose the ability to really get at the harms if they just become standardised,” he said.



Quote for the day:

"What I've really learned over time is that optimism is a very, very important part of leadership." -- Bob Iger

Daily Tech Digest - November 10, 2022

Building Higher-Quality Software With Open Source CD

Prior to the rise of open source CD solutions, companies often relied on point automation using scripts. These could improve efficiency a bit, but when companies moved from the monolithic architecture of a mainframe or on-premises servers to a microservices-based production environment, the scripts could not be easily adapted or scaled to cope with the more complex environment. This led to the formulation of continuous delivery orchestration solutions that could ensure code updates would flow to their destination in a repeatable, orderly manner. Two highly popular open source CD solutions have emerged: Spinnaker and Argo. Spinnaker was developed by Netflix and extended by Google, Microsoft and Pivotal. It was made available on GitHub in 2015. Spinnaker creates a “paved road” for application delivery, with guardrails to ensure only valid infrastructure and configurations reach production. It facilitates the creation of pipelines that represent a software delivery process. These pipelines can be triggered in a variety of ways: manually, via a cron expression, or at the completion of a Jenkins job or another pipeline, among other methods.
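As a concrete illustration, a Spinnaker pipeline's triggers live in its pipeline JSON. The fragment below sketches a cron trigger (Spinnaker uses Quartz-style cron expressions) and a Jenkins-job trigger; the master and job names are invented, and the exact fields should be checked against the Spinnaker documentation for your version:

```json
{
  "triggers": [
    { "type": "cron", "cronExpression": "0 0 4 * * ? *", "enabled": true },
    { "type": "jenkins", "master": "ci-master", "job": "build-app", "enabled": true }
  ]
}
```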


Technical Debt is Quantifiable as Financial Debt: an Impossible Thing for Developers

There are many things about technical debt that can be quantified. Henney mentioned that we can list off and number specific issues in code and, if we take the intentional sense in which technical debt was originally introduced, we can track the decisions that we have made whose implementations need to be revisited. If we focus on unintentional debt, we can look at a variety of metrics that tell us about qualities in code. There’s a lot that we can quantify when it comes to technical debt, but the actual associated financial debt is not one of them, as Henney explained: “The idea that we can run a static analysis over the code and come out with a monetary value that is a meaningful translation of technical debt into a financial debt is both a deep misunderstanding of the metaphor – and how metaphors work – and an impossibility.” According to Henney, quantifying how much financial debt is present in the code doesn’t work. At the very least, we would need a meaningful conversion function that takes one kind of concept, e.g., "percentage of duplicate code" or "non-configurable database access", and translates it to another, e.g., euros and cents.
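Henney's point is easy to demonstrate with a deliberately naive sketch. The function below "converts" two measurable code qualities into euros; every constant in it is invented, which is exactly the problem – any such exchange rate is fiction dressed up as precision:

```python
# Arbitrary "exchange rates" -- there is no principled way to choose these.
EUR_PER_DUP_PERCENT = 120.0   # invented: euros per percent of duplicated code
EUR_PER_HARDCODED_DB = 500.0  # invented: euros per non-configurable DB access

def technical_debt_eur(dup_percent: float, hardcoded_db_calls: int) -> float:
    """Pretend-translate real, measurable metrics into money.
    The inputs are quantifiable; the conversion is not."""
    return dup_percent * EUR_PER_DUP_PERCENT + hardcoded_db_calls * EUR_PER_HARDCODED_DB

print(technical_debt_eur(12.5, 3))  # 3000.0 -- precise-looking, but meaningless
```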


How industrial IoT is forcing IT to rethink networks

IIoT is redefining the types of data that enterprises use, and how networks process this data. For example, an IIoT network primarily transmits and processes unstructured data, not fixed-record transactional data. In contrast, the corporate network processes data that is far more predictable, digestible and manageable. The sheer volume and traffic of IIoT data make it virtually a necessity to implement a single, private, dedicated network at each manufacturing facility for its IIoT. Security is also a concern, because the networks that operate on the edges of the enterprise must often be maintained and administered by non-IT personnel who don’t have training in IT security practices. It’s not uncommon for someone on a production floor to shout a password to another employee so they can access a network resource — nor is it uncommon for someone on the floor to admit another individual into a network equipment cage that is supposed to be physically secured and accessible by only a few authorized personnel.


Cloud architects are afraid of automation

As humans, we’re just not that good. While we have experience driving cars and can look out the front window, we don’t have a perfect understanding of current data, past data, and what this data likely means in the operation and driving of the vehicle. Properly configured automation systems do. For the same reasons that we are anxious when our cars drive away without us actively turning the wheel, we are slow to adopt automation for cloud deployments. Those charged with making core decisions about automating security, operations, finops, etc., are actively avoiding automation, largely because they are uncomfortable with critical processes being carried out without humans looking on. I get it. At the end of the day, automation is a leap of faith that the automated systems will perform better than humans. I understand the concern that they won’t work. The adage is true: “To really screw things up requires a computer.” If you make a mistake in setting these systems up, you can indeed do real damage. So, don’t do that. However, as many people also say: “The alternative sucks.” Not using automation means you’re missing out on approaches and mechanisms to run your cloud systems cheaper and more efficiently.


Cybersecurity, cloud and coding: Why these three skills will lead demand in 2023

As the scale and growth of software development accelerates, and with ongoing AI developments in programming and engineering, the role requirements of software development also look set to change. "AI/ML are changing the world of programming much like the calculator and the computer changed the world," says Stormy Peters, VP of Communities at GitHub. "These technological advancements are taking care of a lot of the mundane, grunt work that developers once had to devote all their time to. Development looks different now." ... As we enter 2023 and software development remains at the heart of business strategies, problem-solving, critical thinking and other human skills will prove integral. "While emerging technologies will increasingly enable them to stay in the flow and solve challenging problems, the technicalities in being able to program, engineer, and develop code through a high-level understanding of AI, DevOps, and programming languages will also stay central in importance to the discipline," she adds.


How to effectively compare storage system performance

The best metrics to compare are the ones most applicable to the applications and workloads you will run. If the application is an Oracle database, the performance metric most applicable is 8 KB mixed read/write random IOPS. When the vendor only provides the 4 KB variation, there is a way to roughly estimate the 8 KB results -- simply divide the 4 KB results in half. If the vendor objects, ask for actual 8 KB test results. Use this same simple math for other I/O sizes. Throughput is somewhat more difficult to standardize, especially if vendors don't supply it. You can roughly calculate it by multiplying the sequential read IOPS by the size of the I/O. Latency is the most difficult to standardize, especially when vendors measure it differently. There are many factors that affect application latency, such as storage system load, storage capacity utilization, storage media, storage DRAM caching, storage network congestion, application server load, application server utilization and application server contention. The most important question to ask is how the vendor measured the latency, under what loads and from where. 
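The estimation rules above amount to simple arithmetic, which makes them easy to sanity-check against a data sheet. A small sketch with hypothetical vendor figures:

```python
# Hypothetical vendor-quoted figures.
iops_4k_random = 400_000      # 4 KB mixed random IOPS
iops_seq_read_64k = 100_000   # 64 KB sequential read IOPS

# Rule of thumb from the article: halve the 4 KB result to estimate 8 KB IOPS.
est_iops_8k = iops_4k_random / 2

# Throughput ~= sequential read IOPS x I/O size (KB), converted to MB/s.
throughput_mbps = iops_seq_read_64k * 64 / 1024

print(est_iops_8k, throughput_mbps)  # 200000.0 6250.0
```

These remain rough estimates – if the numbers matter, ask the vendor for actual 8 KB test results, as suggested above.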


8 secrets of successful IT freelancers

An often-overlooked skill is having the knowledge, courage, and ability to steer the client in the right direction. “The customer wants to use the freelancer’s experience and proactivity, therefore it’s very important that the IT freelancer states his or her true opinion when he or she thinks that the customer is moving in the wrong direction,” says Soren Rosenmeier, CEO of Right People Group, a firm that matches clients with IT and business consultants. Don’t jump the gun, however. Before offering any crucial advice, it’s important to have a complete understanding of the issue at hand. “There might also be a lot of other factors … in the organization that the IT freelancer is unaware of,” Rosenmeier notes. Therefore, prior to offering a suggestion, it’s important to first listen to exactly what the client wants. If the IT freelancer is honest and upfront, the client will receive the benefit of hiring a highly experienced expert, including insights from all the experience the freelancer has gained by working with many other organizations. “At the same time, the customer gets the simplicity and the execution that they want from an external expert that’s hired in to do a specific job,” Rosenmeier says.


In a managed service model, who is responsible and accountable for data?

When it comes to ensuring compliance, since accountability always lies with the business, it is essential to ensure that the MSP is compliant before outsourcing any data management functions. Before this can be done, however, it is essential to establish what exactly needs to be complied with – often the most difficult question, given the myriad of regulations and legislation that apply depending on the sectors and regions the business operates in. There are two pillars to consider when engaging with an MSP with regard to responsibility for data management: data availability and recovery, and data retention. The requirements for compliance, and ultimately accountability, in each will depend on the individual business. This means that before your data can be deemed compliant, you need to understand what that means for your business and have a framework in place that outlines this.


What the experts say about the cybersecurity skills gap

In terms of the skills that are needed, all three cybersecurity leaders agreed that there are various technical skills necessary, similar to any IT role. However, Killian pointed out that not every cybersecurity role is purely a technical one. “Technical skills are usually easier to learn than other important skills like curiosity, ability to ‘play’ in the grey – security issues are rarely obvious ‘yes or no’ problems to solve – and the ability to build relationships with stakeholders. So, unless technical skills are required for the role at hand, they should be prioritised appropriately in job postings,” she said. ... Naidoo reaffirmed that great attitudes and high aptitudes are essential as “technical skills can be taught”. However, she also said it’s important to keep on top of how the tech industry is evolving. “Whatever technical skills are needed in the industry, a corresponding security skill is necessary to secure that technology. So, whether that’s blockchain, quantum or artificial intelligence, or even traditional functions like networks, operating systems and databases, one needs to understand these technologies in order to properly secure them.”


How organisations can right-size their data footprint

“Going on a data diet can be healthy. Cutting out all that junk data that bloats our systems costs us money, raises our data risks and distracts us from the nutritious data that will help us grow. Sometimes, less is truly more.” To reduce data risks and identify useful data, organisations can create synthetic data, which is artificially created data with similar attributes to the original data. According to Gartner, synthetic data will enable organisations to avoid 70% of privacy violation sanctions. Parker said: “If you have sensitive customer data that you want to use but you can’t, you could replace it with synthetic data without losing any of the insights it can deliver.” She added that this could also facilitate data sharing across countries and in industries such as healthcare and financial services. In the UK, for example, the Nationwide Building Society used its transaction data to generate synthetic datasets that could be shared with third-party developers without risking customer privacy, she said. Parker said synthetic data will also enable organisations to plug gaps in the actual data used by artificial intelligence (AI) models. 
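At its simplest, "similar attributes" means preserving a column's statistical shape while producing entirely artificial records. The stdlib-only sketch below (the transaction amounts are invented) fits a mean and standard deviation, then samples synthetic values from them; production synthetic-data tools also preserve correlations between columns, which this toy does not:

```python
import random
import statistics

random.seed(7)  # reproducibility for the illustration

# Hypothetical sensitive transaction amounts (the "real" data).
real = [120.5, 98.0, 143.2, 110.7, 131.9, 104.3, 126.8]

# Fit the column's mean and spread, then sample artificial records.
mu, sigma = statistics.mean(real), statistics.stdev(real)
synthetic = [round(random.gauss(mu, sigma), 2) for _ in range(len(real))]

print(synthetic)  # seven artificial amounts with a similar distribution
```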



Quote for the day:

"Leverage is the ability to apply positive pressure on yourself to follow through on your decisions even when it hurts." -- Orrin Woodward

Daily Tech Digest - November 09, 2022

5 ways to use predictive insights to get the most from your data

With the proliferation of SaaS tools, we seem to be collecting so much more data, yet most companies still struggle to integrate it properly to extract insights that would be indicative of future performance. There are a variety of reasons for that: internal data privacy, legacy mindset around who owns what data, lags in data warehousing strategy or operational know-how about the mechanics of integrating it. ... The CMO Survey found that after a decade of integrating customer data across channels, marketers are still struggling, with most giving their organization a 3.5 out of 7 score on the effectiveness of their customer information integration across purchasing, communication and social media channels. ... Too often organizations are overly focused on dashboards and analyzing past trends to determine future actions. Dashboards and reports are often thought of as the final deliverables of data, but this thinking is limiting data’s value. Think about how your acquisition, monetization and retention journeys are orchestrated today, then feed predictive scoring data right into those business systems and tools. 


Coming Clean: Why Cybersecurity Transparency Is A Strength, Not A Weakness

In the wake of the new disclosure proposals, the management of cybersecurity events can no longer be an afterthought in maintaining operating standards. It’s now been elevated to a major concern along with financial risks, such as capital and credit risk. Despite the technical challenges, compliance is generally straightforward. Organizations must develop discipline in how they detect and defend against cyber threats. In addition, they must improve the way they report on them. If they don’t want their next cyber incident to turn into a material event, they need to minimize the risk of a breach in the first place. Remember, the opposite of due diligence is negligence. One way to get started is to focus on the application layer, as that’s where the “money” is. Decades of focus on network-based threats have improved the protection from some cyberattacks, but many business applications remain vulnerable. Applications suffer from numerous vulnerabilities, as outlined in the OWASP Top 10. These are known, common threats that can be countered by using Web application firewalls.


AI eye checks can predict heart disease risk in less than a minute, finds study

“This AI tool could let someone know in 60 seconds or less their level of risk,” the lead author of the study, Prof Alicja Rudnicka, told the Guardian. If someone learned their risk was higher than expected, they could be prescribed statins or offered another intervention, she said. Speaking from a health conference in Copenhagen, Rudnicka, a professor of statistical epidemiology at St George’s, University of London, added: “It could end up improving cardiovascular health and save lives.” Circulatory diseases, including cardiovascular disease, coronary heart disease, heart failure and stroke, are major causes of ill health and death worldwide. Cardiovascular disease alone is the most common cause of death globally. It accounts for one in four deaths in the UK alone. While several tests to predict risk exist, they are not always able to accurately identify those who will go on to develop or die of heart disease. Researchers developed a fully automated AI-enabled tool, Quartz, to assess the potential of retinal vasculature imaging – plus known risk factors – to predict vascular health and death.


Mobile Application Security Best Practices

Strong credentials are a must for both web and mobile application development. For mobile apps, you can choose to either have a native login flow, which means the user enters their credentials within the app, or a web-based login flow, where they are directed to a web browser to log in. Native login flows provide a better user experience but are generally thought to be less secure. Hypermedia authentication APIs are a solution now popping up to bridge this gap and provide the best of both worlds. Hypermedia authentication APIs interact with the authorization server directly without the need for an intermediary like the browser window. Regardless of how the user enters their credentials, your app should enforce some type of password policy to ensure a strong password is used, and it should not store the access and refresh tokens anywhere except secure storage (like the iOS keychain or Android Keystore). ... Finally, your mobile app should follow best practices for secure coding, just as you would with web applications. Security should be incorporated from the start of the app’s design, with testing occurring throughout the development process.
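Password-policy enforcement is straightforward to sketch. The rules below are illustrative only, not a recommendation – a real policy should follow current guidance, which tends to favour length and breached-password checks over composition rules:

```python
import re

def password_issues(pw: str, min_len: int = 12) -> list[str]:
    """Return policy violations; an empty list means the password passes.
    These specific rules are illustrative only."""
    issues = []
    if len(pw) < min_len:
        issues.append(f"shorter than {min_len} characters")
    if not re.search(r"[a-z]", pw):
        issues.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        issues.append("no uppercase letter")
    if not re.search(r"\d", pw):
        issues.append("no digit")
    return issues

print(password_issues("correct horse"))        # ['no uppercase letter', 'no digit']
print(password_issues("Tr0ub4dor&3xtraLen"))   # []
```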


Cybersecurity threats: what awaits us in 2023?

Businesses will still be mostly concerned with ransomware. The conflict between Russia and Ukraine has marked an end to any possible law enforcement cooperation in the foreseeable future. We can therefore expect that cybercrime groups from either bloc will feel safe to attack companies from the opposing side. Some may even perceive this as their patriotic duty. The economic downturn will lead more people to poverty, which always translates to increased criminality, and we know ransomware to be extremely profitable. ... Zero trust will take on greater prominence with the continued role of the remote and hybrid workplace. Remote work will continue driving the need for zero trust since hybrid work is now the new normal. With the federal government mandating agencies to adopt zero-trust network policies and design, we expect this to become more common and the private sector to follow suit as 2023 becomes the year of verifying everything. ... In 2023, we might see a slight decline in the raw number of ransomware attacks, reflecting the slowdown of the cryptocurrency markets.


Google and Renault are creating a 'software-defined vehicle'

Renault will leverage Google's Cloud technology to securely manage data capture and analytics. They'll also use Google's ML and AI capabilities. "Our collaboration with Renault Group has improved comfort, safety, and connectivity on the road," Sundar Pichai, CEO of Google and Alphabet, said in a statement. "Today's announcement will help accelerate Renault Group's digital transformation by bringing together our expertise in the cloud, AI, and Android to provide for a secure, highly-personalized experience that meets customers' evolving expectations." Google says some features of the SDV will include predictive maintenance, accurate real-time detection of vehicle failures, a better driving experience, and insurance models reflective of driving behaviors. "Equipped with a shared IT platform, continuous over-the-air updates, and streamlined access to car data, the SDV approach developed in partnership with Google will transform our vehicles to help serve future customers' needs," said Luca de Meo, CEO of Renault Group.


Why automating finance is just an integration game

What is clear is the increasing demand for decision intelligence with financial analytics at its heart. RPA suppliers are increasingly repositioning themselves as automated intelligence companies, using RPA tools to drive key functions, such as finance. Gartner believes a third of large organisations will be using decision intelligence for structured decision-making to improve competitive advantage in the next two years. Recent research by enterprise application integration firm Jitterbit backs this up. Focusing on mid-sized companies (referred to as Mittelstand) in the DACH region (comprising Germany, Austria and Switzerland), Jitterbit found that 73% of these businesses want to be hyperautomated within three years because “the health of their company depends on it”. The barriers to achieving this are typical – too many manual data processes, isolated data silos and a lack of departmental integration. What is becoming clear is that financial analytics can be the core and the catalyst of intelligent automation transformations.


Detecting Cyber Risks Before They Lead to Downtime

To avoid costly downtime, threats to operational continuity must be detected and investigated as early as possible. That can be accomplished by scanning connected devices for configuration changes and vulnerabilities. However, unlike traditional IT, OT assets cannot be continuously scanned in the same manner, and many risks will remain unnoticed. Instead, a system designed for manufacturing environments must have the ability to passively monitor the network infrastructure to locate assets and detect behavior changes and anomalies. That requires understanding dozens of industrial protocols, continuously monitoring the communications, and checking against a database of OT/ICS-specific Indicators of Compromise (IOCs, or evidence of a breach) and CVEs. The bane of many monitoring systems is that they produce a flood of information about potential harm, not all of it urgent. To be useful, critical alerts must be prioritized based on operational or cybersecurity risk so the right team can respond. For example, OT engineers need to quickly spot undesired process values, incorrect measurements or when a critical device fails so they can resolve issues more quickly.
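That prioritization step can be reduced to a scoring function. The sketch below – fields and weights are invented for illustration – ranks alerts by severity weighted by asset criticality, escalating anything that matches a known indicator of compromise:

```python
# Hypothetical alerts from a passive OT monitoring system.
alerts = [
    {"id": 1, "severity": 2, "asset_criticality": 1, "ioc_match": False},
    {"id": 2, "severity": 3, "asset_criticality": 5, "ioc_match": True},
    {"id": 3, "severity": 4, "asset_criticality": 2, "ioc_match": False},
]

def risk(alert):
    """Toy risk score: severity weighted by how critical the asset is."""
    score = alert["severity"] * alert["asset_criticality"]
    if alert["ioc_match"]:   # known indicator of compromise: escalate
        score *= 2
    return score

triaged = sorted(alerts, key=risk, reverse=True)
print([a["id"] for a in triaged])  # [2, 3, 1]
```

In practice the score would also factor in operational risk – for instance, whether the affected device sits on a safety-critical process – so the right team responds first.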


Challenges to Successful AI Implementation in Healthcare

Incorporating AI systems could improve healthcare efficiency without compromising quality, and this way, patients could receive better and more personalized care. Investigations, assessments, and treatments can be simplified and improved by using AI systems that are smart and efficient. However, implementing AI in healthcare is challenging because it needs to be user-friendly and provide value for patients and healthcare professionals. AI systems are expected to be easy to use and user-friendly, self-instructing, and not require extensive prior knowledge or training. Besides being simple to use, AI systems should also be time-saving and should not require separate digital operating systems to function. ... The healthcare experts noted that implementing AI systems in the county council will be difficult due to the healthcare system’s internal capacity for strategic change management. For the promotion of capabilities to work with implementation strategies of AI systems at the regional level, experts highlighted the need for infrastructure and joint ventures with familiar structures and processes.


AI Ethics: Four Essentials CIOs Must Know

Enterprises must investigate how data is used to train their algorithms in order to develop explainable AI. Although this won’t address the bias issue, it will guarantee that firms are aware of the underlying causes of any problems so they can take appropriate action. Synthetic data, in addition to actual data sets, is essential for addressing ethical issues. For instance, synthetic data can be used to correct biases in real data that are unjust and skewed toward particular groups of individuals. Additionally, synthetic data can be used to boost the volume and produce an objective dataset if the volume is inadequate. ... Executives must design AI systems that can instantly identify fabricated data and immoral behavior. This necessitates screening suppliers and partners for the improper use of AI in addition to examining a company’s own AI. Examples include the employment of convincing false text and videos to discredit competitors or the use of AI to carry out sophisticated cyber-attacks. As AI technologies become more accessible, this problem will worsen.



Quote for the day:

"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis