Daily Tech Digest - May 13, 2021

Why VMware’s Tom Gillis Calls APIs ‘the Future of Networking’

Gillis says there are three steps to building “world-class security” in the data center. Step one is software-based segmentation, which can be as simple as separating the production environment from the development environment. “And then you want to make those segments smaller and smaller until we get them down to per-app segmentation, which we call microsegmentation,” he explained. Step two requires visibility into in-band network traffic. “We’re gonna go through on a flow-by-flow basis, and start looking at, okay, this one is legitimate, and this one is WannaCry, and being able to figure that out using a distributed architecture,” Gillis said. Step three involves the ability to do anomaly detection, which allows analysts to find unknown threats as attackers continually change their tactics. “Most security-conscious companies do this with a network TAP,” Gillis says. These test access points (TAPs) allow companies to access and monitor network traffic by making copies of the packets. However, deploying all of these network TAPs and storing the copies in a data lake becomes “very cumbersome, very operationally deficient,” Gillis said.


'FragAttacks' eavesdropping flaws revealed in all Wi-Fi devices

To be clear, these attacks would require the threat actor to be on the local network alongside the targets; these are not remotely exploitable flaws that could, for instance, be embedded in a webpage or phishing email. The attacker would either have to be on a public Wi-Fi network, have gotten access to a private network by obtaining the password, or have tricked their mark into connecting to a rogue access point. Thus far, there have been no reports of the vulnerabilities being exploited in the wild. Discoverer Mathy Vanhoef opted to hold the public disclosure until vendors could be briefed and given time to patch the bugs. So far, at least 25 vendors have posted updates and advisories. Both Microsoft and the Linux Kernel Organization were warned ahead of time, and users can protect themselves by updating to the latest version of their operating systems. In a presentation set for the USENIX Security conference, Vanhoef explained how, by manipulating the unauthenticated "aggregated" flag in a frame, instructions can be slipped into the frame and executed by the target machine. This could, for example, allow an attacker to redirect a victim to a malicious DNS server.


The state of digital transformation in Indonesia

Indonesian firms most often face people-related challenges in their digital transformation. A lack of technology skills and knowledge, and a shortage of available employees, point to a critical talent crunch. Firms may already be paying a price for not putting employee experience higher on their business agenda. Besides data issues, challenges also include securing the digital transformation: only 17% of firms in Indonesia are currently adopting a zero-trust strategic cybersecurity framework. Fewer than 20% mention securing budgeting and funding for DT as a key challenge. This indicates that early successes are recognized in boardrooms and among executive leaders, and that the vast majority of firms in Indonesia have prepared their transformation budgets well. Indonesian firms face fewer budget challenges for digital transformation than their peers in other markets. Firms are also prioritizing cloud and are building new applications and services primarily on public cloud. Tech executives in Indonesia therefore face their most immediate challenges around people, skills, and culture. Upskilling, retention, and aligning employee priorities to digital transformation are crucial for ongoing success—and firms must act immediately. Agile has successfully taken hold in IT organizations, but tech executives must take the lead and collaborate across lines of business to drive adaptiveness across the organization.


Law firms are building A.I. expertise as regulation looms

Just because A.I. is an emerging area of law doesn’t mean there aren’t plenty of ways companies can land in legal hot water today using the technology. Hall says this is particularly true if an algorithm winds up discriminating against people based on race, sex, religion, age, or ability. “It’s astounding to me the extent to which A.I. is already regulated and people are operating in gleeful bliss and ignorance,” he says. Most companies have been lucky so far—enforcement agencies have generally had too many other priorities to take too hard a look at the more subtle cases of algorithmic discrimination, such as a chatbot that might steer white customers and Black customers to different car insurance deals, Hall says. But he thinks that is about to change—and that many businesses are in for a rude awakening. Working with Georgetown University’s Center for Security and Emerging Technology and the Partnership on A.I., Hall was among the researchers who helped document 1,200 publicly reported cases of A.I. “system failures” in just the past three years. The consequences have ranged from people being killed, to false arrests based on facial recognition systems misidentifying people, to individuals being excluded from job interviews.


BRD’s Blockset unveils its white-label cryptocurrency wallet for banks

“The concept is really a result of learnings from working with our customers, tier one financial institutions, who need a couple things,” Traidman told TechCrunch. “Generally they want to custody crypto on behalf of their customers. For example, if you’re running an ETF, like a Bitcoin ETF, or if you’re offering customers buying and selling, you need a way to store the crypto, and you need a way to access the blockchain.” “The Wallet-as-a-Service is the nomenclature we use to talk about the challenge that customers are facing, whereby blockchain is really complex,” he added. “There are three V’s that I talk about: variety, a lot of velocity because there’s a lot of transactions per second, and volume because there’s a lot of total aggregate data.” Blockset also enables clients to add features like trading crypto or fiat, or lending Bitcoin or stablecoins to take advantage of high interest rates. Enterprises can develop and integrate their own solutions or work with Blockset’s partners. Other companies that offer enterprise blockchain infrastructure include Bison Trails, which was recently acquired by Coinbase, and Galaxy Digital.


Democratize Machine Learning with Customizable ML Anomalies

Customizable machine learning (ML) based anomalies for Azure Sentinel are now available for public preview. Security analysts can use anomalies to reduce investigation and hunting time as well as improve their detections. Typically, these benefits come at the cost of a high benign positive rate, but Azure Sentinel’s customizable anomaly models are tuned by our data science team and trained with the data in your Sentinel workspace to minimize the benign positive rate, providing out-of-the-box value. If security analysts need to tune them further, however, the process is simple and requires no knowledge of machine learning. ... A new rule type called “Anomaly” has been added to Azure Sentinel’s Analytics blade. The customizable anomalies feature provides built-in anomaly templates for immediate value. Each anomaly template is backed by an ML model that can process millions of events in your Azure Sentinel workspace. You don’t need to worry about managing the ML run-time environment for anomalies because we take care of everything behind the scenes. In public preview, all built-in anomaly rules are enabled by default in your workspace.
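
The anomalies these rules produce land in the workspace like any other log data, so they can be queried programmatically. A minimal sketch using the azure-monitor-query Python SDK, assuming the preview's Anomalies table and its RuleName column (verify both against your own workspace schema):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Authenticates via environment variables or managed identity;
# the workspace ID below is a placeholder.
client = LogsQueryClient(DefaultAzureCredential())

kql = """
Anomalies
| where TimeGenerated > ago(1d)
| summarize count() by RuleName
"""
response = client.query_workspace(
    workspace_id="<sentinel-workspace-id>",
    query=kql,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```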


How to stop AI from recognizing your face in selfies

Most of the tools, including Fawkes, take the same basic approach. They make tiny changes to an image that are hard to spot with the human eye but throw off an AI, causing it to misidentify who or what it sees in a photo. This technique is very close to a kind of adversarial attack, where small alterations to input data can force deep-learning models to make big mistakes. Give Fawkes a bunch of selfies and it will add pixel-level perturbations to the images that stop state-of-the-art facial recognition systems from identifying who is in the photos. Unlike previous ways of doing this, such as wearing AI-spoofing face paint, it leaves the images apparently unchanged to humans. Wenger and her colleagues tested their tool against several widely used commercial facial recognition systems, including Amazon’s AWS Rekognition, Microsoft Azure, and Face++, developed by the Chinese company Megvii Technology. In a small experiment with a data set of 50 images, Fawkes was 100% effective against all of them, preventing models trained on tweaked images of people from later recognizing those people in fresh images.
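
Fawkes' own cloaking optimization is more involved (it targets the feature space of recognition models), but the class of techniques the passage describes is easy to sketch. Below is a minimal fast-gradient-sign-style perturbation in PyTorch, as a generic illustration rather than Fawkes' actual algorithm; `model` and `label` are assumed to be a trained classifier and a target class tensor:

```python
import torch
import torch.nn.functional as F

def perturb(model, image, label, epsilon=0.01):
    """Add a small adversarial perturbation to one image tensor (C, H, W)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    # Step in the direction that most increases the model's loss;
    # a small epsilon keeps the change imperceptible to humans.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```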


Agile Transformation: Bringing the Porsche Experience into the Digital Future with SAFe

Agile means, in fact, many things, but above all, it is a shared commitment. What really matters are the underlying values, such as openness, self-commitment, and focus. Nor should we forget the main principles behind agile work: customer orientation, embracing change and continuous improvement, empowerment and self-organization, simplicity, and transparency. In other words, what we learned quite early on is the importance of establishing not only ambitious goals but also a shared vision across teams. That requires bringing together different goals and building alignment around a common purpose. Furthermore, we have learned that it is important to focus on incremental change. We now focus on a small number of topics and pursue them persistently. Transformation takes time. Lifelong learning also means that change is an ongoing process — it never ends. Sometimes change may be hard, but we are not alone. It affects many areas outside the Digital Product Organization, and it is essential that we take others along on the journey. Finally, it is important to keep in mind that successful and long-lived companies are usually the ones that learn to be agile and stable at the same time.


Recruiting and retaining diverse cloud security talent

The first step to encouraging more diversity within the cyber security workforce is representation. Businesses need to look at their teams and collaborate with their community and industry to create a platform that will inspire individuals to enter industries they may not have considered before. For example, company representatives at events act as role models, and their individual passion can be a strong inspiration and draw for a wide range of candidates. For this reason, it’s vital that security and cloud teams – and in particular members from diverse backgrounds – have a voice on traditional media and social platforms. Diverse voices should be seen and heard in newspapers, on corporate blogs, and in broadcast, where they can share insight into their careers and expertise, encouraging new talent to join the industry and their business specifically. Similarly, mentorship programmes help businesses to attract and retain talent. For those moving into the industry, changing companies, or transitioning into a new role, having a mentor provides support and the comfort of representation, and showcases their achievements.


3 areas of implicitly trusted infrastructure that can lead to supply chain compromises

Once the server a software repository is hosted on is compromised, an attacker can do just about anything with the repositories on that machine if the users of the repository are not using signed git commits. Signing commits works much like author-signed packages from package repositories, but brings that authentication down to the individual code change. To be effective, this requires every user of the repository to sign their commits, which is a burden from a user perspective: PGP is not the most intuitive of tools and will likely require some user training to implement. That inconvenience is a necessary trade-off, though, because signed commits are the only way to verify that commits are coming from the original developers and to prevent malicious committers masquerading as them. Signed commits would also have made the HTTPS-based commits to the PHP project’s repository immediately suspicious. They do not, however, alleviate all problems, as a compromised server with a repository on it can allow the attacker to inject themselves into several locations during the commit process.
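
Verification has to happen on the consuming side too, or signatures add nothing. As a sketch, a repository's recent history can be audited for unsigned or unverifiable commits using git's %G? format code, driven from Python (assumes it runs inside a clone with GPG available):

```python
import subprocess

# %H = commit hash; %G? = signature status:
# G = good, B = bad, U/X/Y/R/E = various problems, N = no signature.
log = subprocess.run(
    ["git", "log", "-n", "100", "--pretty=format:%H %G?"],
    capture_output=True, text=True, check=True,
).stdout

for line in log.splitlines():
    if not line.endswith(" G"):
        print("unsigned or unverifiable commit:", line)
```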



Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek

Daily Tech Digest - May 12, 2021

Web Application Security is not API Security

Broken Object Level Authorization (BOLA) is number one on the API Top 10 list. Uber’s API had this vulnerability. Thankfully, it was discovered by security researchers before malicious actors did damage (as far as we know). But it illustrates well how dangerous BOLA can be. The vulnerability leaked sensitive information, including an authentication token that could be used to perform a full account takeover. The vulnerability appears when a new driver joins the Uber platform. The browser sends the “userUuid” to the API endpoint, and the API returns data about the driver that is used to populate the client. ... Application security technologies such as Web Application Firewall, Next Generation Web Application Firewall, and Runtime Application Self-Protection don’t typically find the kinds of vulnerabilities we’ve discussed. These attacks present as ordinary traffic, so many defenses let them through. The future of API security lies with business logic. Application security tools have to understand the application’s context and business logic to know that non-managers shouldn’t be adding collaborators to a store or that a client shouldn’t access that user ID’s information.
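
The fix for BOLA is rarely exotic: an explicit object-level check that the authenticated caller is allowed to see the object it names. A minimal Flask sketch of the pattern (the endpoint, header, and field names are illustrative, not Uber's actual API):

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
DRIVERS = {"uuid-123": {"name": "A. Driver"}}

def caller_uuid():
    # Stand-in for real authentication middleware resolving the session user.
    return request.headers.get("X-User-Uuid")

@app.route("/drivers/<user_uuid>")
def get_driver(user_uuid):
    # Without this comparison the endpoint is a textbook BOLA:
    # any client could fetch any driver's record just by supplying a UUID.
    if user_uuid != caller_uuid():
        abort(403)
    driver = DRIVERS.get(user_uuid)
    if driver is None:
        abort(404)
    return jsonify(driver)
```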


Why IT should integrate information security with digital initiatives

Integrating information security with digital initiatives can also go a long way in dealing with the rise in ransomware attacks, as explained by W. Curtis Preston, chief technical evangelist at Druva. “Ransomware attacks are particularly becoming a daily occurrence, and it’s only gotten worse since the pandemic,” said Preston. “Just last year, the FBI reported a 400 percent increase in ransomware, and the rates of these attacks are not predicted to slow down anytime soon. These attacks not only cause significant financial damage, but can diminish a brand’s reputation and customer trust. “For organisations looking to remain secure while keeping pace with today’s digitised business landscape, integrating security with digital initiatives is imperative. A holistic approach to security that includes detection, resilience, and data recovery will allow organisations to mitigate cyber risk and thrive in today’s digital landscape. “Security must also be embedded into the organisation’s culture. This means prioritising security, and ensuring that security experts are involved in critical business decision making from an early stage....”


UK to fund national cyber teams in Global South

Raab said the UN’s recent unanimous agreement on principles for how states should operate in cyber space was an important stepping stone, but the UK now wanted wider agreement on how to respond to nation-states that “systematically commit malicious cyber attacks”. “We have got to win hearts and minds across the world for our positive vision of cyber space as a free space, open to all responsible users and there for the benefit of the whole world,” said Raab. “And, frankly, we’ve got to prevent China, Russia and others from filling the multilateral vacuum. That means doing a lot more to support the poorest and most vulnerable countries. “Today I am very pleased to announce that the UK government will invest £22m in new funding to support cyber capacity building in those vulnerable countries, particularly in Africa and the Indo-Pacific. “That money will go to supporting national cyber response teams, advising on mass online safety awareness campaigns, and collaborating with Interpol to set up a new cyber operations hub in Africa. The idea of that will be to improve co-operation on cyber crime investigations, and support the countries involved to mount joint operations.”


Why You Should Be Prepared to Pay a Ransom

A clear-eyed approach to the ransomware threat also makes it easier to handle the PR fallout from an attack. That's partly because you can plan ahead, and figure out how to create a crisis communications strategy that's aligned with the reality of the situation you find yourself in. Just as importantly, though, it's far easier to explain your ransom payments to customers and shareholders if you've been upfront about the risks you face, and haven't previously claimed that you'd never, ever pay to retrieve stolen data. Perhaps the most important reason to be honest about your ransomware response strategy, though, is that it gives you full visibility into the true cost of ransomware attacks, which in turn allows you to make more realistic cybersecurity ROI calculations. When you know how much a ransomware attack will cost you — including the ransom, the fines, and the potential damage to your brand — then you can make smarter and more informed decisions about how much you should be investing in cybersecurity designed to keep your data safe. It's always better, after all, to make sensible investments in security upfront and avoid getting hacked in the first place.


Prediction: The future of CX

Predictive CX platforms allow companies to better measure and manage their CX performance; they also inform and improve strategic decision making. These systems make it possible for CX leaders to create an accurate and quantified view of the factors that are propelling customer experience and business performance, and they become the foundation to link CX to value and to build clear business cases for CX improvement. They also create a holistic view of the satisfaction and value potential of every customer that can be acted upon in near real time. Leaders who have built such systems are creating substantial value through a wide array of applications across performance management, strategic planning, and real-time customer engagement. ... Prioritizing CX efforts through intentional strategic planning is another promising use case for data-driven systems that allow CX leaders to understand which operational, customer, and financial factors are creating systemic issues or opportunities over time. One US healthcare payer, for example, built a “journey lake” to determine how to improve its customer care.


Artificial intelligence revolution offers benefits and challenges

"Knowing where and how to use AI is not always easy. Innovation in business is typically incremental and rarely transformative to the extent that more vociferous AI hype suggests. Because AI and the automation it entails come at a cost, businesses will need to find the optimal level of AI that integrates the new with the old, balancing the costs of acquisition and disruption with productivity, quality and flexibility needs and expectations." "This leads us to a second caveat. AI, especially 'affordable AI," still has limited capabilities. AI relies on data utilization and exploitation. From a business perspective, this requires knowing what data the business has—and its potential value to improving products or processes. These are not givens." The authors nominate ethical dilemmas as a third risk facing the uptake of AI. They note the negative experiences "of Microsoft's racist chatbot (a poorly developed chatbot that mimicked the provocative language of its users), Amazon's AI-based recruitment tool that ignored female job applicants (and) Australia's Robodebt. Garbage in, garbage out." At the very least, these AI tools were not trained appropriately. But training isn't everything. 


Three post-pandemic predictions for the world of work

Let’s face it — many executives in senior leadership positions do not invest much time or energy in the leadership part of their role. Instead, they might simply be swept along by the busywork of endless meetings, or they are focused more on advancing their own careers and engaging in corporate politics. But the actual work of leadership is more intentional. With so many companies adopting policies that allow for remote work, the burden that is shifted onto leadership is greater. The C-suite needs to articulate the strategy in ways that provide clear signals to everyone in terms of what they should be working on and why it’s important. And with so many people working out of the office, fostering and embedding the corporate culture will have to become a priority, given that colleagues may seem more like ships passing in the night (if they meet in person at all). Leaders will have to work overtime to share the company’s values, and the stories behind them. And they’ll need to reinforce those values at every employee touch point if people are going to adopt them as they did when everyone was together.


The best CISOs think like Batman, not Superman

Why should CISOs learn to think like Batman? For starters, Batman knows that fighting crime isn’t a popularity contest and doesn’t expect thanks from the people he’s trying to protect. In the same way, CISOs should accept that if they’re popular, they’re probably doing their job wrong. People should feel a bit of angst when the CISO’s shadow falls over their desk — because the CISO should be prodding them to make uncomfortable decisions, badgering them to do better, and preventing them from settling into complacency. Your role isn’t to keep people happy — it’s to keep them safe, despite the groaning and muttering your efforts inspire. Batman also knows that you can’t fight crime by basking in the sunshine. Instead, you’ve got to know the city’s underbelly and fight crooks and gangsters on their own turf. In just the same way, CISOs need to live with a foot in the underworld. It’s only by understanding the way that hackers think and operate that you can hope to keep your organization safe, and that means knowing your way around the murkier corners of the dark web and spending plenty of time tracking the scripts, strategies, and other dirty tricks being shared by the black-hat crowd.


AI offers an array of document processing opportunities

Utilising neural networks, AI-driven document processing platforms offer a leapfrog advance over traditional recognition technologies. At the outset, a system is ‘trained’ so that a consolidated core knowledge base is created about a particular (spoken) language, form and/or document type. In AI jargon, this is known as the ‘inference’. This knowledge base then expands and grows over time as more and more information is fed into the system and it self-learns, becoming able to recognise documents and their contents as they arrive. This is achieved by using a feedback ‘retraining loop’ – think of it as supervised learning overseen by a human – whereby errors in the system are corrected when they arise so that the inference (and the metadata underlying it) updates, learns and is then able to deal with similar situations on its own when they next appear. It’s not dissimilar to how the human brain works, and how children learn a language. In other words, the more kids talk, make mistakes and are corrected, the better they get at speaking. The same is true of AI when applied to document analysis and processing. The inference becomes ever more knowledgeable and accurate.
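
A toy version of that retraining loop, with a simple scikit-learn text classifier standing in for the recognition model and a hypothetical review_corrections() feed of human-corrected labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["invoice total due in 30 days", "dear sir, a complaint about service"]
labels = ["invoice", "complaint"]

# Initial training builds the core knowledge base ("inference").
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(docs, labels)

def review_corrections():
    # Hypothetical feed: documents the model got wrong, relabelled by a human.
    return ["formal complaint regarding an invoice"], ["complaint"]

# Supervised feedback loop: fold the corrections back in and retrain,
# so similar documents are handled correctly next time.
new_docs, new_labels = review_corrections()
model.fit(docs + new_docs, labels + new_labels)
```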


The Long Road to Rebuilding Trust after 'Golden SAML'-Like Attacks

Re-establishing trust in the aftermath of a Golden SAML attack or similar attacks can be potentially disruptive. If an organization suspects that a Golden SAML attack has been used against them, the most important step is to rotate the token signing and token encryption certificates in ADFS twice in rapid succession, says Doug Bienstock, manager at FireEye Mandiant's consulting group. This action should be done in tandem with traditional eradication measures for blocking any known malware and resetting passwords across the enterprise, he says. Organizations that don't rotate — or change — the keys twice in rapid succession run the risk of a copy of the previous potentially compromised certificates being used to forge SAML tokens. CyberArk's Reiner says key rotation could cause disruption if security teams are not prudent about how it is implemented. "Rotating means revoking the old key and creating a new one," he says. "That means you have removed the trust between your own network and other cloud services." In normal situations, when an organization wants to rotate existing keys, there's a grace period during which the old key will continue to work while the new one is rolled out.
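
The reason for rotating twice is mechanical: AD FS keeps a primary and a secondary token-signing certificate, and relying parties trust both. A toy Python model of that trust list (not ADFS code) shows why a single rotation leaves the stolen certificate trusted:

```python
def rotate(trusted, new_cert):
    # The new certificate becomes primary; the old primary is
    # retained as the trusted secondary.
    return [new_cert] + trusted[:1]

trusted = ["stolen-cert"]            # attacker holds this key material
trusted = rotate(trusted, "cert-A")  # ['cert-A', 'stolen-cert'] - forged tokens still validate
trusted = rotate(trusted, "cert-B")  # ['cert-B', 'cert-A']      - stolen cert finally evicted
print(trusted)
```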



Quote for the day:

“One of the most sincere forms of respect is actually listening to what another has to say.” -- Bryant H. McGill

Daily Tech Digest - May 11, 2021

5 reasons why companies need to develop and prioritize their AI Business strategies now!

We live in a technological moment comparable to the early days of the Internet, which sparked fascination — and even some apocalyptic predictions and fears — when it first appeared. Today, we can all see that the Internet is such a natural part of our everyday lives that we only notice its existence when it’s missing. I’m betting my career that the same will soon happen with AI. But for companies to benefit from this disruption, they have to stop focusing on the technology itself and see it as an amplifier of the value that can be offered to customers, which is at the heart of business strategies. Many of the discussions about Artificial Intelligence in teams, departments, and companies today still start with the question, “What data do we have, and what technology do we need to work with it?” Please don’t do that anymore! It’s time to turn the tables and look at what the customers’ needs are. What are the questions your customers are asking that, when answered, will generate value for them and the business? Only when you have this clarity should you start looking for the correct data and related AI applications.


Critical Infrastructure Under Attack

Unlike ransomware, which must penetrate IT systems before it can wreak havoc, DDoS attacks appeal to cybercriminals because they're a more convenient IT weapon: they don't have to get around multiple security layers to produce the desired ill effects. The FBI has warned that more DDoS attacks are employing amplification techniques to target US organizations after noting a surge in attack attempts after February 2020. The warnings came after other reports of high-profile DDoS attacks. In February, for example, the largest known DDoS attack was aimed at Amazon Web Services. The company's infrastructure was slammed with a jaw-dropping 2.3 Tb/s — or 20.6 million requests per second — assault, Amazon reported. The US Cybersecurity and Infrastructure Security Agency (CISA) also acknowledged the global threat of DDoS attacks. Similarly, in November, New Zealand cybersecurity organization CertNZ issued an alert about emails sent to financial firms that threatened a DDoS attack unless a ransom was paid. Predominantly, cybercriminals are just after money.


9 tips for speeding up your business Wi-Fi

For larger networks, consider using a map-based Wi-Fi surveying tool such as those from AirMagnet, Ekahau or TamoGraph during deployment and for periodic checks. Along with capturing Wi-Fi signals, these tools enable you to run a full RF spectrum scan to look for non-Wi-Fi interference as well. For ongoing interference monitoring, use any functionality built into the APs that will alert you to rogue APs and/or other interference. Map-based Wi-Fi surveying tools usually offer some automated channel analysis and planning features. However, if you're doing a survey on a smaller network with a simple Wi-Fi stumbler, you'll have to manually create a channel plan. Start assigning channels to APs on the outer edges of your coverage area first, as that’s where interference from neighboring wireless networks is most likely to be. Then work your way into the middle, where it’s more likely that co-interference from your own APs is the problem. ... If you have more than one SSID configured on the APs, keep in mind that each virtual wireless network must broadcast separate beacons and management packets. This consumes more airtime, so use multiple SSIDs sparingly.
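
The edge-first channel plan described above is essentially greedy graph colouring over the three non-overlapping 2.4 GHz channels. A small sketch, assuming you already know which APs can hear each other:

```python
NON_OVERLAPPING = (1, 6, 11)  # 2.4 GHz channels that don't interfere

def plan_channels(aps_outer_to_inner, neighbours):
    """aps_outer_to_inner: APs ordered from the coverage edge inward.
    neighbours: dict mapping each AP to the APs within interference range."""
    channels = {}
    for ap in aps_outer_to_inner:
        taken = [channels[n] for n in neighbours[ap] if n in channels]
        # Pick the channel least used by already-assigned neighbours.
        channels[ap] = min(NON_OVERLAPPING, key=taken.count)
    return channels

print(plan_channels(
    ["edge1", "edge2", "core"],
    {"edge1": ["core"], "edge2": ["core"], "core": ["edge1", "edge2"]},
))  # edge APs share channel 1; the core AP lands on 6
```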


Increasing Developer Effectiveness by Optimizing Feedback Loops

The key here is usefulness and empowerment. If the process is useful, accurate, and easy to interpret, and the team is empowered to optimize, then it will naturally get shortened. Developers will want to run useful validations more often, earlier in the cycle. Take the example of regression tests owned by another siloed team that are flaky, slow, run out of cycle, and make it hard to figure out what is wrong. It is unlikely that they will get optimized, because the developers don’t perceive much value in them; whereas with a test suite based on the test pyramid that is owned by the team, lives in the same code base, and is the gate to deployment to all environments, the team will come up with ways to improve it. You can apply the concept of feedback loops at different scales, for example, super small loops: when the developer is coding, what feedback can we give them to help and nudge them? How does your IDE inform you that you have made a mistake, or how does it help you find the syntax for the command you are looking for? When we look at the developer flow, discovering information is a big source of friction.


Coding interviews suck. Can we make them better?

Another issue is that technical interviews aren't standardized, meaning they can vary wildly from company to company – making it almost impossible for candidates to prepare fully. As a result, their fate rests largely in the hands of whoever is carrying out the interview on that day. "The interviews typically are not well thought out and not structured," says Tigran Sloyan, co-founder and CEO of assessment platform CodeSignal. "What typically happens is, you have a developer whose job it is to evaluate this person. Most developers don't have either practice or understanding of what it means to conduct a structured interview." When there's so much variability, biases begin making their way into the process, says Sloyan. "Where there's lack of structure, there is more bias, and what ends up happening is if whoever you're interviewing looks like you and talks like you, you [the interviewer] start giving them more hints, you start leading them down the right paths." The reverse is also true, Sloyan says. "If they don't look like you and talk like you, you might throw them a couple more curveballs, and then good luck making it through that process."


RPA and How It's Adding Value in the Workplace

Automation is the use of advanced technologies to replace humans in low-value, repetitive, and tedious tasks, with the goal of increasing profitability and lowering operating costs. It benefits businesses by allowing them more flexibility in how they operate and by increasing employee productivity: automating the most time-consuming and repetitive tasks lets employees focus on more important ones. In practice, the objective is to improve workflows and processes. Alex Kwiatkowski, principal industry consultant for financial services at UK-based SAS, points out that banks are constantly seeking efficiency gains. Firms have long sought to remove time-consuming, and occasionally error-prone, manual tasks in favor of technology-infused, straight-through processing in the front, middle, and back offices. More than just fanciful hyperbole, automation has proven an enabler for achieving such goals. No matter the bank, there's always room for improvement. Keep in mind, too, that these advances need not be giant leaps forward but are often the aggregation of marginal gains, if you will.


Shedding Light on the DarkSide Ransomware Attack

Like other gangs that operate modern ransomware codes, such as Sodinokibi and Maze, DarkSide blends crypto-locking data with data exfiltration and extortion. If they are not initially paid for a decryption key, the attackers threaten to publish confidential data they stole from the victim and post it on their dedicated website, DarkSide Leaks, for at least 6 months. When a ransom note appears on an encrypted networked device, the note also communicates a TOR URL to a page called “Your personal leak page” as part of the threat that if the ransom is not paid, data will be uploaded to that URL. Ransom is demanded in Bitcoin or Monero. If it is not paid by a specific initial deadline, the amount doubles. ... Most ransomware operators understand that they need speed to encrypt as much data as possible as quickly as they can. They, therefore, opt to use symmetric encryption for that first phase and then encrypt the first key with an asymmetric key. In DarkSide’s case, they claim to have come up with an accelerated implementation; the malware uses the Salsa20 stream cipher to encrypt victim data.
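
The speed/security split described here is ordinary hybrid (envelope) encryption, the same pattern legitimate backup and messaging tools use: a fast stream cipher for the bulk data, slow public-key cryptography only for the small data key. A benign sketch of the pattern with PyCryptodome (an illustration, not DarkSide's implementation):

```python
from Crypto.Cipher import PKCS1_OAEP, Salsa20
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes

recipient_key = RSA.generate(2048)  # in practice only the public half is distributed

# Fast symmetric phase: a fresh 256-bit key encrypts the bulk data.
data_key = get_random_bytes(32)
stream = Salsa20.new(key=data_key)
ciphertext = stream.nonce + stream.encrypt(b"large payload ...")

# Slow asymmetric phase: only the 32-byte data key is RSA-encrypted,
# so decryption requires the matching private key.
wrapped_key = PKCS1_OAEP.new(recipient_key.publickey()).encrypt(data_key)
```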


Can We Trust the Cloud Not to Fail?

Reducing, or transforming, a failure detector algorithm with one set of completeness and accuracy properties into a failure detector algorithm with another set of such properties means finding a reduction or transformation algorithm that complements the original failure detection algorithm and guarantees that it behaves the same way as the target failure detection algorithm in the same environment, given the same failure patterns. This concept is formally called reducibility of failure detectors. Because in reality it can be difficult to implement strongly complete failure detectors in asynchronous systems, this matters in practice: as T.D. Chandra and S. Toueg showed, failure detectors in the weak completeness class can be transformed into failure detectors with strong completeness. We can likewise say that the original timeout-based failure detector algorithm (described earlier) was reduced, or transformed, into an Eventually Weak Failure Detector by using increasing timeouts.
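
The increasing-timeouts construction is compact enough to sketch: every false suspicion doubles that process's timeout, so a correct-but-slow process is eventually never suspected again (the eventual weak accuracy property). A minimal Python model:

```python
import time

class EventuallyWeakDetector:
    def __init__(self, initial_timeout=1.0):
        self.timeout, self.last_seen, self.suspected = {}, {}, set()
        self.initial = initial_timeout

    def heartbeat(self, proc):
        self.last_seen[proc] = time.monotonic()
        if proc in self.suspected:
            # False suspicion: the process was alive after all, so back off.
            self.suspected.discard(proc)
            self.timeout[proc] = self.timeout.get(proc, self.initial) * 2

    def is_suspected(self, proc):
        limit = self.timeout.setdefault(proc, self.initial)
        if time.monotonic() - self.last_seen.get(proc, 0.0) > limit:
            self.suspected.add(proc)
        return proc in self.suspected
```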


The 6 Dimensions of a Winning Resilience Strategy

As CIOs look to pivot to this new and more complete approach to building resilience, they will be coming from different places in the journey depending on their existing business strategy and investments. The good news is that this journey does not need to be undertaken in one swoop. CIOs can start with the quick wins, such as identifying and deploying the right collaboration tools, before moving on to longer-term processes, such as how best to migrate applications to the cloud and embrace cloud-native models. The starting point should be a resilience roadmap so you can plan when best to address the various dimensions of your strategy. This needs to be supported by a focus on your people to ensure they are able to leverage new technologies to their fullest and understand how to thrive in their work no matter where they are. Identifying what needs to change and putting in place an effective resilience strategy is now a critical business differentiator. In the past, business resilience was about doing the best in tough times, and often CIOs’ focus was on robust oversight and control to ensure that there were no security breaches while business-as-usual was disrupted.


Cloud data and security — what every healthcare technology leader needs to know

Knowing where an organisation’s data resides, who owns that data and what type of data it is, will ease any security incident and any legal or compliance implications. It will also ease an organisation’s ability to manage risk and improve their response over time. Commenting on the importance of knowing your data, William Klusovsky, Global Cybersecurity Strategy, Governance, Risk & Compliance Offering Lead at Avanade, said: “Often, technology leaders will forget that asset management is not just about keeping track of hardware, it means knowing where your data is, where your data flows and who owns that data.” The challenge of having a holistic view of an organisation’s data landscape is intensified by the problem of Shadow IT — the procurement of software and tech without IT’s knowledge. As new systems and applications are onboarded by various departments it’s easy to lose track of these, and what data sits within them, without a strong systems acquisition process. With healthcare specifically, the rapid introduction of IoT medical devices and all the new data they’re generating, exemplifies this. 



Quote for the day:

"Stories are the single most powerful weapon in a leader's arsenal." -- Howard Gardner

Daily Tech Digest - May 10, 2021

How to minimise technology risk and ensure that AI projects succeed

Organisations are using lots of different technologies and multiple processes to try and manage all this, and that’s what’s causing the delay around getting models into production and being used by the business. If we can have one platform that allows us to address all of those key areas, then the speed at which an organisation will gain value from that platform is massively increased. And to do that, you need an environment to develop the applications to the highest level of quality and internal customer satisfaction, and an environment in which the business can then easily consume those applications. Sounds like the cloud, right? Well, not always. When you look at aligning AI, you also have to think about how AI is consumed across an organisation; you need a method to move it from R&D into production, but when it’s deployed, how do we actually use it? What we are hearing is that organisations actually want a hybrid development and provisioning environment, where this combination of technologies can run with no issues, no matter what your development or target environment is, such as on cloud, on-premise, or a combination.


Getting a grip on basic cyber hygiene

In regard to cyber defense, basic cyber hygiene, or a lack thereof, can mean the difference between a thwarted or successful cyber-attack against your organization. In the latter case, the results can be catastrophic. Almost all successful cyber-attacks take advantage of conditions that could reasonably be described as “poor cyber hygiene” – not patching, poor configuration management, keeping outdated solutions in place, etc. Inevitably, poor cyber hygiene invites risk and can put the overall resilience of an organization in jeopardy. Not surprisingly, today’s security focus is on risk management: identifying risks and vulnerabilities, and eliminating and mitigating those risks where possible, to make sure your organization is adequately protected. The challenge here is that cybersecurity is often an afterthought. To improve a cybersecurity program, there needs to be a specific action plan that the entire cyber ecosystem of users, suppliers, and authorities (government, regulators, legal system, etc.) can understand and execute. That plan should have an emphasis on basic cyber hygiene and be backed up by implementation guidance, tools and services, and success measures.


Get started with MLOps

Getting machine learning (ML) models into production is hard work. Depending on the level of ambition, it can be surprisingly hard, actually. In this post I’ll go over my personal thoughts (with implementation examples) on principles suitable for the journey of putting ML models into production within a regulated industry; i.e., when everything needs to be auditable, compliant and in control — a situation where a hacked-together API deployed on an EC2 instance is not going to cut it. Machine Learning Operations (MLOps) refers to an approach where a combination of DevOps and software engineering is leveraged in a manner that enables deploying and maintaining ML models in production reliably and efficiently. Plenty of information can be found online discussing the conceptual ins and outs of MLOps, so instead this article will focus on being pragmatic, with a lot of hands-on code etc., basically setting up a proof-of-concept MLOps framework based on open source tools. The final code can be found on GitHub. At its core, it is all about getting ML models into production; but what does that mean?
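
The post's full framework lives in its GitHub repo, but the core idea - every production model is a tracked, versioned artifact rather than a file on someone's laptop - can be shown in a few lines with MLflow, one common open-source choice (the registry step assumes a configured tracking server; the dataset and model name are illustrative):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X, y)
    # Parameters and metrics are recorded alongside the artifact,
    # making every model version auditable and reproducible.
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model", registered_model_name="iris-classifier")
```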


ESB vs Kafka

The appropriate answer to both questions is: “Yes, but….” In spite of their similarities, ESBs and stream-processing technologies such as Kafka are not so much designed for different use cases as for wholly different worlds. True, a flow of message traffic is potentially “unbounded” – e.g., an ESB might transmit messages that encapsulate the ever-changing history of an application’s state – but each of these messages is, in effect, an artifact of a world of discrete, partitioned – i.e., atomic – moments. “Message queues are always dealing in the discrete, but they also work very hard to not lose messages, not to lose data, to guarantee delivery, and to guarantee sequence and ordering in message transmits,” said Mark Madsen, an engineering fellow with Teradata. Stream-processing, by contrast, correlates with a world that is in a constant state of becoming; a world in which – as the pre-Socratic philosopher Heraclitus famously put it – “everything flows.” In other words, says Madsen, using an ESB to support stream processing is roughly analogous to using a Rube Goldberg-like assembly line of buckets – as distinct from a high-pressure feed from a hose – to fill a swimming pool.
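
The contrast shows up even in a trivial example: Kafka appends events to a replayable log rather than delivering discrete messages once. A sketch with the kafka-python client, assuming a broker on localhost and an "orders" topic:

```python
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", b'{"id": 1, "state": "created"}')  # append to the log
producer.flush()

# Unlike a consumed queue message, the event is still there: a new
# consumer group can replay the stream from the beginning.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="analytics",
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.offset, record.value)
```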


A quick rundown of multi-runtime microservices architecture

A multi-runtime microservices architecture represents a two-component model that very much resembles the classic client-server relationship. However, the components that define multi-runtime microservices -- the micrologic and the mecha -- reside on the same host. Despite this, the micrologic and mecha components still operate on their own, independent runtime (hence the term "multi-runtime" microservices). The micrologic is not, strictly speaking, a component that lives among the various microservices that exist in your environment. Instead, it contains the underlying business logic needed to facilitate communication using predefined APIs and protocols. It is only liable for this core business logic, not for any logic contained within the individual microservices. The only thing it needs to interact with is the second multi-runtime microservices component -- the mecha. The mecha is a distributed, reusable and configurable component that provides off-the-shelf primitive types geared toward distributed services. The mecha uses declarative configuration to determine the desired application states and manage them, often relying on plain text formats such as JSON and YAML.
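
In practice the mecha usually runs as a sidecar that the micrologic reaches over a plain local API. Dapr is a widely cited example of a mecha; the sketch below uses its documented service-invocation route, with an app ID and method name that are purely illustrative:

```python
import requests

# The micrologic keeps only business logic; retries, mTLS, service
# discovery and tracing are delegated to the sidecar on localhost.
response = requests.post(
    "http://localhost:3500/v1.0/invoke/order-service/method/create",
    json={"id": 1, "item": "widget"},
)
print(response.status_code, response.text)
```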


Basics Of Julia Programming Language For Data Scientists

Julia is a relatively new, fast, high-level dynamic programming language. Although it is a general-purpose language and can be used to write all kinds of applications, much of its package ecosystem and features are designed for high-level numerical computing. Julia draws from various languages, from more low-level systems programming languages like C to high-level dynamically typed languages such as Python, R and MATLAB. This is reflected in its optional typing, its syntax and its features. Julia doesn’t have classes; it works around this by supporting the quick creation of custom types (defined with the struct keyword) and of methods for these types. However, these functions are not limited to the types they are created for and can have many versions, a feature called multiple dispatch. Julia also supports direct calls to C functions without any wrapper API. And instead of defining scope based on indentation like Python, Julia uses the keyword end, much akin to MATLAB. It would be ridiculous to summarize all its features and idiosyncrasies here; you can refer to the wiki or the docs welcome page for a more comprehensive description of Julia.
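
For readers coming from Python, the nearest built-in analogue is functools.singledispatch, which chooses an implementation by the type of the first argument only; Julia generalizes the same idea to the types of all arguments. A Python sketch of the single-dispatch half of that comparison:

```python
from functools import singledispatch

@singledispatch
def describe(value):            # fallback implementation
    return f"something: {value!r}"

@describe.register
def _(value: int):              # chosen when the first argument is an int
    return f"integer {value}"

@describe.register
def _(value: list):
    return f"list of {len(value)} items"

print(describe(3), "|", describe([1, 2]))
# Julia would additionally dispatch on the types of *every* argument,
# e.g. collide(a::Asteroid, b::Ship) vs collide(a::Ship, b::Ship).
```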


NCSC publishes smart city security guidelines

Mark Jackson, Cisco’s national cyber security advisor for the UK and Ireland, said: “The complexity of the smart cities marketplace, with multiple device manufacturers and IT providers in play, could quite easily present cyber security issues that undermine these efforts. The NCSC’s principles are one of the most sophisticated pieces of government-led guidance published in Europe to date. “The guidance set out for connected places generally aligns to cyber security best practice for enterprise environments, but also accounts for the challenges of connecting up different systems within our national critical infrastructure. “With DCMS [the Department for Digital, Culture, Media and Sport] also planning to implement legislation around smart device security, this is indicative of a broader government strategy to level up IoT security across the board. “This will enable new initiatives in the field of connected places and smart cities to gather momentum across the UK – with cyber security baked into the design and build phase. As lockdown restrictions ease and people return to workplaces and town centres, they need assurance that their digital identities and data are protected as the world around them becomes more connected.”


What if the hybrid office isn’t real?

“A shift to hybrid work means that people will be returning to the office both with varying frequencies and for a new set of reasons,” says Brian Stromquist, co-leader of the technology workplace team at the San Francisco–based architecture and design firm Gensler. “What people are missing right now are in-person collaborations and a sense of cultural connection, so the workplace of the future — one that supports hybrid work — will be weighted toward these functions.” Offices will need a way to preserve a level playing field for those working from home and those on-site. One option is to make all meetings “remote” if not everyone is physically in the same space. That’s a possibility Steve Hare, CEO of Sage Group, a large U.K. software company, suggested to strategy+business last year. According to Stromquist, maintaining the right dynamic will require investing in technologies that create and foster connections between all employees, regardless of physical location. “We’re looking at tools like virtual portals that allow remote participants to feel like they’re there in the room, privy to the interactions and side conversations that you’d experience if you were there in person,” he says.


Real-time data movement is no longer a “nice to have”

Applications and systems can “publish” events to the mesh, while others can “subscribe” to whatever they are interested in, irrespective of where they are deployed: in the factory, the data centre, or the cloud. This is essential for the critical industries we rely on, such as capital markets, Industry 4.0, and functional supply chains. Indeed, there are few industries today that can do without as-it-happens updates to their systems. Businesses and consumers demand extreme responsiveness as a key part of a good customer experience, and many technologies depend on real-time updates to changes in the system. However, many existing methods for ensuring absolute control and precision of such time-sensitive logistics don’t holistically operate in real time, at scale, without data loss, and therefore leave room for fatal error. From retail, which relies on the online store being in constant communication with the warehouse and the dispatching team, to aviation, where pilots depend on real-time weather updates in order to carry their passengers to safety, today’s industries cannot afford anything other than real-time data movement. Overall, when data is enabled to move in this way, businesses can make better decisions.
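
Stripped of the distributed machinery, publish/subscribe reduces to topics and handler fan-out. A toy in-process sketch of the pattern (a real event mesh adds brokered routing across sites, guaranteed delivery, and back-pressure):

```python
from collections import defaultdict

class EventMesh:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:  # fan out to every subscriber
            handler(event)

mesh = EventMesh()
mesh.subscribe("warehouse/stock", lambda e: print("dispatch team sees:", e))
mesh.subscribe("warehouse/stock", lambda e: print("online store sees:", e))
mesh.publish("warehouse/stock", {"sku": "A-1", "on_hand": 0})
```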


The Cloud Comes of Age Amid Unprecedented Change

Just look at how businesses compete. The influx of cloud technologies during the pandemic has underlined that the technology stack is a core mode of differentiation. Industry competition is now frequently a battle between technology stacks, and the decisions leaders make around their cloud foundation, cloud services and cloud-based AI and edge applications will define their success. Look at manufacturing, where companies are using predictive analytics and robotics to inch ever closer to delivering highly customized on-demand products. The pandemic has forced even the most complex supply chain operations from manufacturers to operate at the whim of changing government requirements, consumer needs and other uncontrollable factors, such as daily pandemic fluctuations. Pivot quickly and you’ll not only emerge as leaders of your industry, you may even gain immeasurable consumer intimacy. A true cloud transformation should start with a plan to shift significant capabilities to cloud. It is more than just migrating a few enterprise applications. Implementing a “cloud first” strategy requires companies to completely reinvent their business for cloud by reimagining their products or services, workforce, and customer experiences.



Quote for the day:

"Don't try to be the "next". Instead, try to be the other, the changer, the new." -- Seth Godin

Daily Tech Digest - May 09, 2021

10 Business Models That Reimagine The Value Creation Of AI And ML

Humanizing experiences (HX) are disrupting and driving the democratization and commoditization of AI. These more human experiences rely on immersive AI. By 2030, immersive AI has the potential to co-create innovative products and services, navigating through adjacencies to double cash flow, as opposed to a potential 20% decline in cash flow for nonadopters, according to McKinsey. GAFAM has been an influential force in pioneering and championing deep learning within its core business fabric. NATU and BAT have deeply embedded AI into their most profound routes. Google’s Maps and Indoor Navigation, Google Translate and Tesla’s autonomous cars all exemplify immersive AI. The global AI marketplace is an innovative business model that provides a common marketplace for AI product vendors, AI studios and sector/service enterprises to offer their niche ML models through a multisided platform and a nonlinear commercial model. Think Google Play, Amazon or the App Store. SingularityNet, Akira AI and Bonseyes are examples of multisided marketplaces.


Self-Supervised Learning Vs Semi-Supervised Learning: How They Differ

In the case of supervised learning, AI systems are fed with labelled data. But as we work with bigger models, it becomes difficult to label all the data. Additionally, there is just not enough labelled data for some tasks, such as training translation systems for low-resource languages. At a 2020 AAAI conference, Facebook’s chief AI scientist Yann LeCun introduced self-supervised learning to overcome these challenges. This technique obtains a supervisory signal from the data by leveraging its underlying structure. The general method for self-supervised learning is to predict an unobserved or hidden part of the input. For example, in NLP, the words of a sentence are predicted using the remaining words in that sentence. Since self-supervised learning uses the data structure to learn, it can use various supervisory signals across large datasets without relying on labels. A self-supervised learning system aims at creating a data-efficient artificial intelligence system. It is generally referred to as an extension of, or even an improvement over, unsupervised learning methods. However, as opposed to unsupervised learning, self-supervised learning does not focus on clustering and grouping.
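
The word-prediction example maps directly onto masked language modelling, where the label is the hidden word itself, so no human annotation is required. A short sketch with the Hugging Face transformers pipeline:

```python
from transformers import pipeline  # pip install transformers

# BERT was pretrained exactly this way: hide a token, predict it from context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("Self-supervised learning predicts the [MASK] part of the input."):
    print(prediction["token_str"], round(prediction["score"], 3))
```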


Thinking About Switching Career to Data Science? Pick the Right Strategy!

As trivial as it looks, the gigantic volume of blog posts, articles, books, videos, tutorials, talks, slides and presentations, online courses, … is at your service, most of it for FREE, to guide you in the direction you want to go. Use them and use them often! Use these resources not only to learn new skills but also to learn more about the differences between career paths in data science - from product analysts, business analysts, statisticians, … - to get a sense of the trends in data science, and to figure out where you see yourself fitting! Read consistently: data science is a vast field, and the more you read and learn, the more valuable you become to your future employer! Use your network to connect with data scientists and speak with them about their roles, experiences, projects, and career paths in analytics. Use your network to connect to the opportunities you may not be aware of! Let people know you want to transition to data science and that you would appreciate their help along the way. Use your network to find roles with an overlap between your current roles, responsibilities, and skills and data science roles.


Artificial Intelligence Is The Transformative Force In Healthcare

Artificial intelligence, a technology that has become a household name, is poised to become a transformational force in healthcare. The healthcare industry is one where a lot of challenges are encountered and opportunities open up. From chronic diseases and radiology to cancer and risk assessment, artificial intelligence has shown its power by deploying precise, efficient, and impactful interventions at exactly the right moment in a patient’s care. The complexity and rise of data in healthcare have unveiled several types of artificial intelligence. Today, artificial intelligence and robotics have evolved to the stage where they can take care of patients better than medical staff and human caretakers. The global artificial intelligence in healthcare market is expected to grow from US$4.9 billion in 2020 to US$45.2 billion by 2026, with a projected CAGR of 44.9% during the forecast period. Artificial intelligence and related technologies are prevalent in business and society and are rapidly moving into the healthcare sector.


Hadoop vs. Spark: Comparing the two big data frameworks

The fundamental architectural difference between Hadoop and Spark relates to how data is organized for processing. In Hadoop, all the data is split into blocks that are replicated across the disk drives of the various servers in a cluster, with HDFS providing high levels of redundancy and fault tolerance. Hadoop applications can then be run as a single job or a directed acyclic graph (DAG) that contains multiple jobs. In Hadoop 1.0, a centralized JobTracker service allocated MapReduce tasks across nodes that could run independently of each other, and a local TaskTracker service managed job execution by individual nodes. ... In Spark, data is accessed from external storage repositories, which could be HDFS, a cloud object store like Amazon Simple Storage Service or various databases and other data sources. While most processing is done in memory, the platform can also "spill" data to disk storage and process it there when data sets are too large to fit into the available memory. Spark can run on clusters managed by YARN, Mesos and Kubernetes or in a standalone mode. Similar to Hadoop, Spark's architecture has changed significantly from its original design. 
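
A few lines of PySpark show the model the passage describes: data is pulled from an external store (the HDFS path here is purely illustrative) and transformed in memory, spilling to disk only when partitions outgrow it:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("digest-example").getOrCreate()

# The source could equally be S3, a JDBC database, or local files.
events = spark.read.csv("hdfs:///data/events/*.csv", header=True)

(events
 .groupBy("event_type")
 .agg(F.count("*").alias("n"))
 .orderBy(F.desc("n"))
 .show())
```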


How retailers are embracing artificial intelligence

Personalized recommendation engines have been a mainstay of shopping for years. There’s a folk legend in data mining circles that claims Target's data mining and analytics were so powerful, it once recommended baby clothing to a girl before she knew she was pregnant. Sadly, it’s just a myth, dating from a hype-filled 2012 New York Times report. But while big data and AI use cases for online shopping are still largely based in centralized data centers, a growing number of use cases are seeing retailers embrace Edge computing and AI, both at the Edge and in the cloud. Fulfillment centers are increasingly being used to automate warehouses in order to speed up deliveries and optimize space, which can make supply chains and logistics more efficient. In-store, robots are being used to stack shelves and clean floors. Machine vision is being brought in to scan shelves and manage inventory, suggest fashion ideas to customers, and, in the case of Amazon Go and other competitors, remove the need for cashiers and traditional checkouts.


Designing for Behavior Change by Stephen Wendel

Designing for behavior change doesn’t require a specific product development methodology—it is intended to layer on top of your existing approach, whether it is agile, lean, Stage-Gate, or anything else. But to make things concrete, Figure 4 shows how the four stages of designing for behavior change can be applied to a simple iterative development process. At HelloWallet, we use a combination of lean and agile methods, and this sample process is based on what we’ve found to work. The person doing the work of designing for behavior change could be any one of these people. At HelloWallet, we have a dedicated person with a social science background on the product team (that’s me). But this work can be, and often is, done wonderfully by UX folks. They are closest to the look and feel of the product, and have its success directly in their hands. Product owners and managers are also well positioned to seamlessly integrate the skills of designing for behavior change to make their products effective. Finally, there’s a new movement of behavioral social scientists into applied product development and consulting at organizations like ideas42 and IrrationalLabs. 


Cybersecurity has much to learn from industrial safety planning

A scenario-based analysis makes it easier to understand risk without a high degree of technical jargon or acumen, and the longstanding practices of safety engineers provide an excellent template for this kind of analysis. One such practice is the hazard and operability (HAZOP) analysis, a process that examines and manages risk as it relates to the design and operation of industrial systems. One common method for performing HAZOPs is a process hazards analysis (PHA), in which specialized personnel develop scenarios that would result in an unsafe or hazardous condition. It is not a risk reduction strategy that simply looks at individual controls; it considers more broadly how the system works as a whole and the different scenarios that could impact it. Cybersecurity threats are the work of deliberate and thoughtful adversaries, whereas safety scenarios often result from human or system errors and failures. Because such failures are statistical rather than adversarial, a safety integrity level can be measured with some confidence from failure rates, such as one failure every 10 or 100 years.
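As a toy illustration of that statistical framing (my own sketch, not from the article), an observed failure rate can be estimated from failure intervals and compared against a tolerable target:

```python
# Toy sketch: estimate a failure rate from observed intervals between failures
# and compare it to a tolerable target. All numbers are illustrative.
failure_intervals_years = [9.0, 12.5, 11.0]   # years between observed failures
tolerable_rate = 1 / 100                      # target: at most one failure per 100 years

observed_rate = len(failure_intervals_years) / sum(failure_intervals_years)
print(f"observed rate: {observed_rate:.3f} failures/year "
      f"(about one every {1 / observed_rate:.0f} years)")
print("within target" if observed_rate <= tolerable_rate else "risk reduction needed")
```

No comparable base rate exists for a deliberate adversary, which is exactly the asymmetry the article highlights.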


Geographic databases hold worlds of information

Microsoft’s SQL Server can store two types of spatial data: the geometry type for flat, two-dimensional planes and the geography type for round-earth coordinates. The elements can be built out of simple points or lines or more complex curved sections. The company has also added a set of geographic data formats and indexing to its cloud-based Azure Cosmos DB NoSQL database, intended to simplify geographic analysis of your data set for tasks such as computing store performance by location. Noted for a strong lineage in geographic data processing, ESRI, the creator of ArcGIS, is also expanding to offer cloud services that will first store geographic information and then display it in any of the various formats the company pioneered. ESRI, traditionally a big supplier to government agencies, has developed sophisticated tools for rendering geographic data in a way that’s useful to fire departments, city planners, health departments, and others who want to visualize how a variety of data looks on a map. There is also a rich collection of open source databases devoted to curating geographic information.
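The practical difference between a flat-plane geometry type and a round-earth geography type shows up in distance calculations. Below is a small sketch assuming Python's pyproj package (my choice of tooling, not mentioned in the article); the coordinates are approximate:

```python
# Sketch: planar vs round-earth distance between New York and London.
# Requires the pyproj package; coordinates are approximate.
import math
from pyproj import Geod

nyc = (-74.006, 40.713)     # (lon, lat)
london = (-0.128, 51.507)

# "Geography"-style: geodesic distance on the WGS84 ellipsoid.
geod = Geod(ellps="WGS84")
_, _, meters = geod.inv(nyc[0], nyc[1], london[0], london[1])
print(f"geodesic: {meters / 1000:.0f} km")   # roughly 5,570 km

# "Geometry"-style: naive Euclidean math in degrees -- fine on a flat plane,
# meaningless at continental scale on a round earth.
deg = math.dist(nyc, london)
print(f"planar:   {deg:.1f} degrees of lon/lat (not a real distance)")
```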


Internet of Trusted Things: Democratizing IoT

Right now, the Internet of Things is more dolphin than human. Connections are disparate and clunky, and connecting devices does not create automatic value the way connecting people does. Intelligence has to be connected for the conjoining to add value. But IoT is becoming more intelligent by the day. Edge computing, where Moore's law gives each IoT sensor the computing power to make artificially intelligent decisions without relying on a central cloud hub, creates this intelligence. In the words of Stan Lee, with great power comes great responsibility. So we return to the question: Who controls IoT? In a world with 86 billion devices, each equipped with on-the-edge intelligence, the answer to this question concerns the future of humanity. IoT is notoriously fractured, and countless use cases require domain expertise. As a result, there has been no winner-take-all dynamic analogous to the internet, where network effects anointed masters in search (Google) and social (Facebook). According to Statista, at the end of 2019 there were 620 IoT platforms, including those of tech behemoths Microsoft and Amazon.



Quote for the day:

"Real leaders are ordinary people with extraordinary determinations." -- John Seaman Garns

Daily Tech Digest - May 08, 2021

Gartner says composable data and analytics key to digital transformation

Gartner said business-facing data initiatives are key drivers of digital transformation in the enterprise. Research showed that 72% of data and analytics leaders are leading, or are heavily involved in, their organizations' digital transformation efforts. These data leaders now confront emerging trends on several fronts. XOps: the evolution of DataOps to support AI and machine learning workflows is now XOps, where the X can also stand for MLOps, ModelOps, and even FinOps. This promises flexibility and agility in coordinating infrastructure, data sources, and business needs in new ways. Engineering decision intelligence: decision support is not new, but decision-making is now more complex. Engineering decision intelligence frames a wide range of techniques, from conventional analytics to AI, to align and tune decision models and make them more repeatable, understandable, and traceable. Data and analytics as a core business function: with the chaos of the pandemic and other disruptions, data and analytics are becoming more central to an organization's success. Companies will have to prioritize data and analytics as core functions rather than as a secondary activity handled by IT.


Everything you need to know to land a job in data science

What does it take to get hired? Organizations are looking for job candidates with a bachelor's or master's degree in computer science, as well as experience with data modeling tools, XML, Python, Java, SQL, AWS and Hadoop. Many data scientist job descriptions also mention the ability to work with a distributed and fast-moving team. Interpreting data for colleagues in business units is increasingly important as well. Ryan Boyd, head of developer relations at Databricks, said that data science will soon be a commonplace skill outside engineering and IT departments as data becomes increasingly fundamental to businesses. "To stay competitive, data scientists need to be equally as obsessed with data storytelling as they are with the minutiae of data software and programs," said Boyd. "Tomorrow's best data scientists will be expected to translate their know-how into actionable insights and compelling stories for different stakeholders across the business, from C-suite executives to product managers." Whether you are looking for your first data science job or figuring out your next career move in the field, the following advice from hiring managers and data science professionals will help you plot a smart and successful course.


Observability and GitOps

Traditional supervision methods have reached their limits with the new standards of application architecture. Managing highly scalable and portable microservices requires adapting tools to facilitate debugging and diagnosis at all times; in other words, it requires the observability of systems. Monitoring and observability are often confused. Basically, the idea of a monitoring system is to get a state of the system based on a predefined set of metrics in order to detect a known set of issues. According to the SRE book by Google, a monitoring system needs to answer two simple questions: “What’s broken, and why?” Analyzing an application over the long term makes it possible to profile it, to better understand its behavior in response to external events, and thus to be proactive in its management. Observability, on the other hand, aims to measure the understanding of a system's state based on multiple outputs. This means observability is a system capability, like reliability, scalability, or security, that must be designed and implemented during system design, coding, and testing.
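As a minimal sketch of the monitoring half of that distinction, here is a predefined metric exposed for scraping, using Python's prometheus_client package (my choice of tooling; the article names none, and the metric and endpoint are hypothetical):

```python
# Monitoring sketch: a predefined metric that a monitoring system scrapes
# to answer "what's broken?". Metric name and endpoint are hypothetical.
import random
import time

from prometheus_client import Counter, start_http_server

REQUEST_ERRORS = Counter(
    "app_request_errors_total",
    "Total failed requests, by endpoint",
    ["endpoint"],
)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        # Simulate traffic; an alert rule on this counter's rate flags "broken".
        if random.random() < 0.1:
            REQUEST_ERRORS.labels(endpoint="/checkout").inc()
        time.sleep(0.5)
```

Observability goes further: answering the "why" for issues that were never anticipated requires correlating many such outputs (metrics, logs, traces) rather than one predefined counter.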


Defending Against Web Scraping Attacks

Web scraping can easily lead to more significant attacks. At my company, we routinely use Web scraping as one of the initial steps in a red team or phishing engagement. By pulling the metadata from posted documents, we can find employee names and usernames and deduce username and email formats, which is particularly helpful when the username format would otherwise be difficult to guess. Mix this with scraping a list of current employees from sites like LinkedIn, and an adversary can perform targeted phishing and credential brute-force attacks. ... Scraping document metadata is also useful for detecting internal hostnames and the software versions in use at the targeted company. This enables an attacker to customize the attack to exploit vulnerabilities specific to that company, and it is an important part of victim reconnaissance. Adversaries can also use scraping to collect gated information from a website if that information isn't properly protected. Take Facebook's password-reset page: anyone can find privately listed people through a simple query with a phone number. While a password-reset page may be necessary, does it really need to confirm or, worse, return a user's private information?
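As an illustration of the metadata-harvesting step described above, here is a sketch assuming Python's pypdf package; the directory and any username guesses derived from it are hypothetical:

```python
# Sketch: harvest author names from publicly posted PDFs (pypdf package assumed;
# the directory is hypothetical). Red teams use such names to deduce
# username and email formats for targeted phishing.
from pathlib import Path
from pypdf import PdfReader

authors = set()
for pdf in Path("downloaded_docs").glob("*.pdf"):
    meta = PdfReader(pdf).metadata
    if meta and meta.author:
        authors.add(meta.author)

for name in sorted(authors):
    # e.g. "Jane Doe" suggests guessing jdoe@, jane.doe@, and similar formats.
    print(name)
```

The defensive corollary is equally simple: strip document metadata before publishing files to the public web.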


From DevOps to MLOPS: Integrate Machine Learning Models using Jenkins and Docker

Continuous integration (CI) and continuous delivery (CD), together known as the CI/CD pipeline, embody a culture of agile operating principles and practices for DevOps teams that allows software development teams to change code more frequently and reliably, and data scientists to continuously test models for accuracy. CI/CD is a way to focus on business requirements such as improved model accuracy, automated deployment steps, or code quality. Continuous integration is a set of practices that drive development teams to continuously implement small changes and check code in to version control repositories. Today, data scientists and IT ops have at their disposal different platforms (on premises, private and public cloud, multi-cloud …) and tools that need to be addressed by an automatic integration and validation mechanism, allowing them to build, package, and test applications with agility. Continuous delivery picks up where continuous integration ends, automating the delivery of applications to selected platforms.
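As one hedged example of what "continuously test the models for accuracy" can look like, the test below could be run by a Jenkins stage inside a Docker container (e.g. via pytest); the model artifact, threshold, and use of scikit-learn are all assumptions, not taken from the article:

```python
# Sketch of a CI gate on model accuracy. The model.pkl artifact, the threshold,
# and the iris data set are hypothetical stand-ins for a team's real pipeline.
import pickle

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.90  # fail the build if the model regresses below this

def test_model_accuracy():
    X, y = load_iris(return_X_y=True)
    _, X_test, _, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    with open("model.pkl", "rb") as f:   # artifact built earlier in the pipeline
        model = pickle.load(f)
    assert model.score(X_test, y_test) >= ACCURACY_FLOOR
```

A failing assertion breaks the build, so a model that regresses never reaches the delivery stage.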


Data Discovery for Business Intelligence

Any company that has had a BI tool for more than a year will deal with the dashboard clutter problem. Ad hoc analyses, quarterly reports, and even core dashboards get outdated or are superseded by new versions over time. The problem is that old dashboards usually don't get deleted. No one wants to delete a dashboard in the shared folder because someone might be using it. This creates a long tail of clutter and inactive reports that people may poke around in without being sure the data is reliable or relevant. Navigating BI tools becomes its own tribal-knowledge task, and it ends up being best to ask others to send you a specific link to open. Worse, someone may be relying on an outdated dashboard for their day-to-day operations. This often happens because dashboard metadata and its freshness aren't tracked automatically. Connecting dashboard metadata with operational metrics like the last successful report run, last edited time, and top users can give visibility into the health of a dashboard. By comparing usage data with these operational metrics, outdated data models can easily be identified and cleaned out.
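A minimal sketch of that comparison, with entirely hypothetical field names and thresholds, might look like this:

```python
# Sketch: flag stale dashboards by joining operational metadata with usage data.
# Field names, dates, and thresholds are all hypothetical.
from datetime import datetime, timedelta

dashboards = [
    {"name": "Q3 revenue (v1)", "last_successful_run": datetime(2021, 1, 4),
     "last_edited": datetime(2020, 11, 2), "views_last_90d": 1},
    {"name": "Core KPIs", "last_successful_run": datetime(2021, 5, 7),
     "last_edited": datetime(2021, 4, 30), "views_last_90d": 412},
]

now = datetime(2021, 5, 8)
stale_after = timedelta(days=90)

for d in dashboards:
    if now - d["last_successful_run"] > stale_after or d["views_last_90d"] < 5:
        print(f"candidate for cleanup: {d['name']}")
```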


Big data is the key to everything. Here are four ways to improve how you use it

While most companies want to focus on the exciting bits, it's the infrastructure that matters. "I think it's almost like a bamboo tree; unless your roots are strong, your tree won't shoot up 90 feet. So for me, the focus on roots is super important," he says. When the foundation is right, you can then start to explore some of the interesting elements of data. During the past 12 months, for example, KFC has strengthened its own digital channels in response to the coronavirus pandemic. Traffic to the web app increased significantly through 2020 as click-and-collect and curb-side pick-up became more popular. ... "When the grape is cut from the vineyard, you don't have much time to make the fermentation process because the grape is degrading in the truck. So we have to move fast," he says. With brands such as Casillero del Diablo and Don Melchor, Concha y Toro operates in over 140 countries, making it one of the biggest wine companies in the world. Data is especially important at harvest time, when the company brings trucks with grapes from different parts of Chile to its wineries.


Four Technologies Disrupting Banking

Blockchain, or distributed ledger technology, has the potential to radically change who has control over our personally identifiable information (PII) and make financial institutions — and online transactions — much more trustworthy. Blockchain can help prove a person’s identity, allowing consumers to create a verified, digital identity they can use with any online institution. By leveraging public key cryptography and referencing a person’s verified credentials on a trustworthy, shared log (the distributed ledger), blockchain can help give people control over their digital identity credentials. Consumers could keep their identity credentials safe and use them as cryptographic evidence whenever their bank or another online business needs to verify their identity. They could also revoke access at any time. A blockchain infrastructure across the internet would give consumers a portable identity to use in digital channels and true control over their PII disclosure. This can help stop fraudulent payment transactions. Currently, if a transaction is disputed as fraud, there are few ways for a business to prove it is legitimate, which results in billions of dollars in losses annually due to chargebacks.
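The public-key mechanics behind such verifiable credentials can be sketched in a few lines, assuming Python's cryptography package; the distributed ledger that would anchor the issuer's public key is omitted here:

```python
# Sketch of the public-key step behind verifiable credentials
# (cryptography package assumed; the shared ledger itself is out of scope).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g. a bank) signs a credential; its public key would be
# published on the distributed ledger for anyone to check against.
issuer_key = Ed25519PrivateKey.generate()
credential = b'{"subject": "alice", "claim": "account-holder-in-good-standing"}'
signature = issuer_key.sign(credential)

# Any relying party can verify the claim without contacting the issuer,
# and the subject can stop presenting the credential to revoke access.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, credential)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")
```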


Email security is a human issue

Humans will inevitably make mistakes when it comes to phishing emails, but it is possible to mitigate these risks by ensuring that cyber defense strategies are front and center in business processes, as well as integrated into company culture. This ensures teams are made aware of potential threats before they run the risk of falling victim to them. IT teams are often expected to take sole responsibility for a company's cybersecurity strategy, yet it is impossible for these experts to monitor the email activity of every employee. With human error cited as a contributing factor in 95% of breaches, it is important to remember that email security, like many other areas of cyber defense, is a human issue, and each member of the team poses a significant risk. While IT professionals should take the lead by distributing relevant information about the latest phishing campaigns targeting their industry, it is also the responsibility of managerial staff to flag IT concerns in their team meetings and integrate cybersecurity issues into regular company updates. These discussions can be started by IT leaders, but the topic of cybersecurity must be discussed by every department to ensure phishing emails do not fly under the radar.


Key Metrics to Track and Drive Your Agile Devops Maturity

Agile software delivery is a complex process that can hide significant inefficiencies and bottlenecks. Fortunately, the process is easily measurable, as there is a rich digital footprint in the toolsets used across it, from pre-development through development, integration and deployment, and out into live software management. However, surfacing data from these myriad sources and synthesising meaningful metrics that compare apples with apples across complex Agile delivery environments is very tricky. Hence, until recently, software delivery metrics were much discussed but little used, until the arrival of Value Stream Management and BI solutions that enable accurate end-to-end software delivery metrics to be surfaced for the first time. ... Cycle Time is an ideal delivery metric for early-stage practitioners. It simply measures the time taken to develop an increment of software. Unlike the more comprehensive measure of Lead Time, Cycle Time is easier to measure because it looks only at the time taken to pull a ticket from the backlog and to code and test it, in preparation for integration and deployment to live.
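Once ticket timestamps are surfaced from the tooling, Cycle Time is a simple computation. A minimal sketch with hypothetical ticket data and field names:

```python
# Sketch: compute Cycle Time (ticket pulled from backlog -> coded and tested)
# per ticket. Timestamps and field names are hypothetical tool exports.
from datetime import datetime
from statistics import median

tickets = [
    {"id": "APP-101", "started": datetime(2021, 4, 1, 9), "done": datetime(2021, 4, 6, 17)},
    {"id": "APP-102", "started": datetime(2021, 4, 2, 10), "done": datetime(2021, 4, 4, 12)},
    {"id": "APP-103", "started": datetime(2021, 4, 5, 9), "done": datetime(2021, 4, 13, 15)},
]

cycle_days = [(t["done"] - t["started"]).total_seconds() / 86400 for t in tickets]
print(f"median cycle time: {median(cycle_days):.1f} days")
```

Lead Time would extend the same calculation back to when the ticket was first requested, which is why it is the harder metric to capture.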



Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi