Daily Tech Digest - May 14, 2021

Thoughts on Cloud Security

With good security professionals in high demand, companies are better off investing in the security professionals they already have who show an interest in the cloud, in order to take their security organization to the next level. Solid training and support will enable them to better collaborate with development teams and significantly raise the security bar of their cloud environment. There are plenty of free resources available today, such as cloud security standards and open source solutions, that can be leveraged. The Center for Internet Security (CIS) controls and AWS’ Well-Architected Framework are great resources to help get started. As a reformed cloud security professional, I can say that embracing the cloud takes a shift in mindset. In general, security teams need to stop saying “no” and getting in the way of innovation. Instead, they need to be able to provide development teams the access they need, when they need it, and put guardrails in place to ensure security. To be successful, it is key to do this in a way that does not have a significant impact on the development experience.


85% of Data Breaches Involve Human Interaction: Verizon DBIR

"Credentials are the skeleton key," Bassett says. Most know stolen credentials are a problem, but what they may not think about is how they spread across attack patterns and enable the start of many different types of data breaches, from phishing campaigns, to stealing the contents of a target mailbox, to a ransomware campaign in which an attacker encrypts then steals data. The trend toward simplicity is evident in the continued increase of business email compromise (BEC), which followed phishing as the second most common form of social engineering, reflecting a 15x spike in "misrepresentation," a type of integrity breach. BEC doubled last year and again this year. Of the 58% of BEC attacks that successfully stole money, the median loss was $30,000, with 95% of BECs costing between $250 and $984,855, researchers learned. Of the breaches analyzed, 85% had a human element. This is a broad term that encompasses any attack that involves a social action: phishing, BEC, lost or stolen credentials, using insecure credentials, human error, misuse, and even malware that has to be clicked then downloaded.


Hybrid working: creating a sustainable model

The evolution of thinking around the workplace we’ve seen in such a short space of time is quite something. Over the course of the last year, business mindsets have shifted from complete allegiance to the physical office, to fully embracing remote working to survive, to a realisation that a hybrid working model may well be the best way for businesses to thrive. Now, as we begin to move out of the pandemic, IT and business leaders should be considering what their workplace strategy looks like in the long term. What can we learn from the last 12 months? What are the tools, technologies and processes we should keep in place? How do we facilitate a reimagined office space? How do we empower employees to be productive and happy wherever they are? There’s no doubt that hybrid working opens up huge opportunities for businesses, from creating a flexible working environment that appeals to a broad range of talent to enabling more efficient ways of working and a healthier work-life balance. But how do we create a hybrid model that is sustainable in the long term?


The Global Artificial Intelligence Race and Strategic Balance

Countries are under pressure to protect their citizens and even political stability in the face of possible malicious/biased uses of AI and Big Data. Because 5G networks are the future backbone of our increasingly digitised economies and societies, ensuring their security and resilience is essential. Even at current capability levels, AI can be used in the cyber domain to augment attacks on cyberinfrastructure. There is no such thing as perfect security, only varying levels of insecurity. These ‘smart’ technologies rely on bidirectional wireless links to communicate with devices and global services, which gives a larger ‘attack surface’ for cyber threats to target. At the same time, 5G networks may lead to politically divided and potentially noninteroperable technology spheres of influence, where one sphere would be led by the US and another by China, with some others in between (for example the EU, South Korea and Japan). All of these concerns are most significant in the context of authoritarian states but may also undermine the ability of democracies to sustain truthful public debates. For example, ‘deepfake’ algorithms can create fake images and videos that cannot easily be distinguished from authentic ones by humans. It is threatening to global security if deepfake methods are employed to promulgate misinformation.


5 developer tools for detecting and fixing security vulnerabilities

Dependabot, now a native GitHub solution, has a simple, straightforward workflow: automatically open Pull Requests for new dependency versions, and alert on vulnerable dependencies. Dependabot will also clearly differentiate between security-related PRs and normal dependency upgrades by tagging [Security] in the title and label, along with including a changelog of the vulnerabilities fixed. ... Similar to Dependabot, Renovate is a GitHub or CLI app that monitors your dependencies and opens Pull Requests when new ones are available. While it supports fewer languages than Dependabot, the main advantage of Renovate is that it's extremely configurable. Ever wished you could write "schedule": "on the first day of the week" in your configs? Well, Renovate allows you to do that! It also provides fine-grained control of auto-merging dependencies based on rules set in the config. ... Snyk is a new one for me, but I really like that it's a product built with developers in mind, regardless of their previous experience with security. While Snyk is a paid product for business+, their free tier covers open-source, personal projects, and small teams, making it a great resource for personal projects and learning, even if you don't have the opportunity to use it on the job!
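As an illustration, a minimal renovate.json along these lines might look like the following sketch; the schedule string is the one quoted above, and the auto-merge rule is shown only as an example of the fine-grained control described:

```json
{
  "extends": ["config:base"],
  "schedule": ["on the first day of the week"],
  "packageRules": [
    { "matchUpdateTypes": ["patch", "minor"], "automerge": true }
  ]
}
```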


Adding Security to Testing to Enable Continuous Security Testing

Security testing is a variant of software testing which ensures that the system and applications in an organization are free from any loopholes that may cause a big loss, Thalayasingam said. Security testing of any system is about finding all possible loopholes and weaknesses of the system which might result in a loss of information at the hands of the employees or outsiders of the organization. To kick off security testing, security experts should train quality engineers about security and how to do manual security testing. Next, quality engineers can work with security experts to narrow down the tests for security testing and add value to existing test cases. This will lead to executing the security tests in sprint-level activities, automating them, and making them part of continuous integration. Quality engineers should add the security checks to their test process for each story, Thalayasingam suggested. This would help to find the obvious security vulnerabilities at a very early stage. The right guidance and training will help quality engineers to gain the security testing mindset.
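A story-level security check of the kind suggested here can be as simple as an automated test asserting that a protected endpoint rejects anonymous requests. A minimal sketch, in which the client, routes, and token are all hypothetical stand-ins for a real test client and suite:

```python
from http import HTTPStatus

class FakeResponse:
    def __init__(self, status_code):
        self.status_code = status_code

class FakeClient:
    """Stand-in for a framework test client (e.g. Flask's or Django's)."""
    PROTECTED = {"/admin", "/export"}

    def get(self, path, token=None):
        if path in self.PROTECTED and token != "valid-token":
            return FakeResponse(HTTPStatus.UNAUTHORIZED)
        return FakeResponse(HTTPStatus.OK)

def check_requires_auth(client, path):
    """Security check a quality engineer can attach to each story:
    the endpoint must refuse access when no credentials are supplied."""
    return client.get(path).status_code in (
        HTTPStatus.UNAUTHORIZED,
        HTTPStatus.FORBIDDEN,
    )
```

Run in CI, a check like this catches the most obvious regressions (an accidentally unprotected route) without any security expertise on the part of the test author.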


Building AI Leadership Brain Trust Is A Business Imperative: Are You Ready?

There are sufficient markers painting this stark prediction if one chooses to dig deeper. Did you know that over half of technology executives in the 2019 Gartner CIO Survey said they intended to employ AI before the end of 2020, up from 14% at the time? Board directors and CEOs have to accelerate their investments in AI, and ensure they are managing the journey wisely, with the right AI leadership skills in place and the machine learning toolkits required to advance AI sustainably and modernize the business. In a recent report by NewVantage Partners, 75% of companies cited fear of disruption from data-driven digital competitors as the top reason they’re investing. There are many questions that board directors and CEOs must ask in the face of any large investment consideration, and AI is not inexpensive. On average, an AI project can range from as low as $30K to $1 million plus for an MVP, depending on the complexity of the data set and the use case being solved to build a baseline AI model that predicts an accurate outcome.


Maximizing a hybrid cloud approach with colocation

Companies are increasingly deploying a hybrid cloud approach to balance the benefits and challenges presented by both the public and private cloud. With the hybrid cloud, both types of cloud environments are integrated, allowing data to move seamlessly between platforms. This hybrid architecture can be designed as a bifurcated system in which the private cloud hosts a company’s sensitive data and mission critical components, and the public cloud hosts the rest. With this type of architecture, the data and applications live permanently in their assigned cloud environment, but the two systems are able to communicate seamlessly. Another option – the cloud bursting model – houses all of a company’s information in the private cloud, but when spikes in demand occur the public cloud provides supplementary capacity. Both hybrid approaches give companies greater control over and access to their IT environments and the ability to implement more stringent security protocols on the private cloud portion of their deployment. In addition, a hybrid approach gives organizations flexibility to build a solution that meets their current needs, but that can also evolve as their needs change.
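The cloud-bursting model described here reduces to a simple capacity split: fill the private cloud first, then spill overflow to the public cloud. A minimal sketch, with capacity in arbitrary illustrative units:

```python
def split_workload(demand, private_capacity):
    """Cloud-bursting decision: return (private_share, public_share).

    All demand stays on the private cloud until its capacity is
    exhausted; only the overflow during a spike goes public.
    """
    private = min(demand, private_capacity)
    return private, demand - private
```

During normal operation the public share is zero, which is why the model preserves the stricter security posture of the private cloud for the steady-state workload.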


Fake Android, iOS apps promise lucrative investments while stealing your money

The operators have created dedicated websites linked to each individual app, tailored to appear as the impersonated organizations in an effort to improve the apparent legitimacy of the software -- and the likelihood of a scam being successful. Sophos' investigation into the apps began with a report of a single malicious app masquerading as a trading company based in Asia, Goldenway Group. The victim, in this case, was targeted through social media and a dating website and lured to download the fake app. Rather than relying on mass spam emails or phishing, attackers may now also take a more personal approach and try to forge a relationship with their victim, such as by pretending to be a friend or a potential love match. Once trust is established, they will then offer some form of time-sensitive financial opportunity and may also promise guaranteed returns and excellent profits. However, once a victim downloads a malicious app or visits a fake website and provides their details, they are lured into opening an account or cryptocurrency wallet and transferring funds. 


When AI Becomes the Hacker

The core question Schneier asks is this: What if artificial intelligence systems could hack social, economic, and political systems at the computer scale, speed, and range such that humans couldn't detect it in time and suffered the consequences? It's where AIs evolve into "the creative process of finding hacks." "They're already doing that in software, finding vulnerabilities in computer code. They're not that good at it, but eventually they will get better [while] humans stay the same" in their vulnerability discovery capabilities, he says. In less than a decade from now, Schneier predicts, AIs will be able to "beat" humans in capture-the-flag hacking contests, pointing to the 2016 DEFCON contest in which an AI-only team called Mayhem came in dead last against all-human teams; that will change, he argues, as AI technology evolves and surpasses human capability. Schneier says it's not so much AIs "breaking into" systems, but AIs creating their own solutions. "AI comes up with a hack and a vulnerability, and then humans look at it and say, 'That's good,'" and use it as a way to make money, like with hedge funds in the financial sector, he says.



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." -- Katzenbach & Smith

Daily Tech Digest - May 13, 2021

Why VMware’s Tom Gillis Calls APIs ‘the Future of Networking’

Gillis says there are three steps to building “world-class security” in the data center. Step one is software-based segmentation, which can be as simple as separating the production environment from the development environment. “And then you want to make those segments smaller and smaller until we get them down to per-app segmentation, which we call microsegmentation,” he explained. Step two requires visibility into in-band network traffic. “We’re gonna go through on a flow-by-flow basis, and start looking at, okay, this one is legitimate, and this one is WannaCry, and being able to figure that out using a distributed architecture,” Gillis said. Step three involves the ability to do anomaly detection, which allows analysts to find unknown threats as the attackers continually change their tactics. “Most security-conscious companies do this with a network TAP,” Gillis says. These test access points (TAPs) allow companies to access and monitor network traffic by making copies of the packets. However, deploying all of these network TAPs and storing the copies in a data lake becomes “very cumbersome, very operationally deficient,” Gillis said.
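The per-app segmentation in step one can be pictured as a default-deny flow policy: a flow between two segments is allowed only if an explicit rule permits it. A minimal sketch, with segment names and rules that are purely illustrative (not any vendor's API):

```python
# Explicit allow-list of (source segment, destination segment) pairs.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"),
    ("app-tier", "db-tier"),
}

def flow_permitted(src_segment, dst_segment):
    """Default-deny microsegmentation: any flow not explicitly
    allowed between segments is dropped."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS
```

The point of microsegmentation is precisely this default-deny posture: lateral movement (e.g. web tier talking directly to the database tier) is blocked unless someone deliberately allowed it.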


'FragAttacks' eavesdropping flaws revealed in all Wi-Fi devices

To be clear, these attacks would require the threat actor to be on the local network alongside the targets; these are not remotely exploitable flaws that could, for instance, be embedded in a webpage or phishing email. The attacker would either have to be on a public Wi-Fi network, have gotten access to a private network by obtaining the password or tricked their mark into connecting with a rogue access point. Thus far, there have been no reports of the vulnerabilities being exploited in the wild. Vanhoef opted to hold the public disclosure until vendors could be briefed and given time to patch the bugs. So far, at least 25 vendors have posted updates and advisories. Both Microsoft and the Linux Kernel Organization were warned ahead of time, and users can protect themselves by updating to the latest version of their operating systems. In a presentation set for the Usenix Security conference, Vanhoef explained how by manipulating the unauthenticated "aggregated" flag in a frame, instructions can be slipped into the frame and executed by the target machine. This could, for example, allow an attacker to redirect a victim to a malicious DNS server.


The state of digital transformation in Indonesia

Indonesian firms are facing people-related challenges most often in their digital transformation. Lack of technology skills and knowledge, and a shortage of employee availability, indicate a critical talent crunch. Firms may already be paying a price for not putting employee experience higher on their business agenda. Besides data issues, challenges also include securing the digital transformation. Only 17% of firms in Indonesia are currently adopting a zero trust strategic cybersecurity framework. Fewer than 20% mention securing budgeting and funding for digital transformation as a key challenge. This indicates that early successes are recognized in boardrooms and with executive leaders and that the vast majority of firms in Indonesia have prepared their transformation budgets well. Indonesian firms face fewer budget challenges for digital transformation than their peers in other markets. Firms are also prioritizing cloud and are building new applications and services primarily on public cloud. Tech executives in Indonesia therefore face their most immediate challenges around people, skills, and culture. Upskilling, retention, and aligning employee priorities to digital transformation are crucial for ongoing success—and firms must act immediately. Agile has successfully taken hold in IT organizations, but tech executives must take the lead and collaborate across lines of business to drive adaptiveness across the organization.


Law firms are building A.I. expertise as regulation looms

Just because A.I. is an emerging area of law doesn’t mean there aren’t plenty of ways companies can land in legal hot water today using the technology. Hall says this is particularly true if an algorithm winds up discriminating against people based on race, sex, religion, age, or ability. “It’s astounding to me the extent to which A.I. is already regulated and people are operating in gleeful bliss and ignorance,” he says. Most companies have been lucky so far—enforcement agencies have generally had too many other priorities to take too hard a look at more subtle cases of algorithmic discrimination, such as a chat bot that might steer certain white customers and Black customers to different car insurance deals, Hall says. But he thinks that is about to change—and that many businesses are in for a rude awakening. Working with Georgetown University’s Center for Security and Emerging Technology and the Partnership on A.I., Hall was among the researchers who have helped document 1,200 publicly reported cases of A.I. “system failures” in just the past three years. The consequences have ranged from people being killed to false arrests based on facial recognition systems misidentifying people to individuals being excluded from job interviews.


BRD’s Blockset unveils its white-label cryptocurrency wallet for banks

“The concept is really a result of learnings from working with our customers, tier one financial institutions, who need a couple things,” Traidman told TechCrunch. “Generally they want to custody crypto on behalf of their customers. For example, if you’re running an ETF, like a Bitcoin ETF, or if you’re offering customers buying and selling, you need a way to store the crypto, and you need a way to access the blockchain.” “The Wallet-as-a-Service is the nomenclature we use to talk about the challenge that customers are facing, whereby blockchain is really complex,” he added. “There are three V’s that I talk about: variety, a lot of velocity because there’s a lot of transactions per second, and volume because there’s a lot of total aggregate data.” Blockset also enables clients to add features like trading crypto or fiat or lending Bitcoin or Stablecoins to take advantage of high interest rates. Enterprises can develop and integrate their own solutions or work with Blockset’s partners. Other companies that offer enterprise blockchain infrastructure include Bison Trails, which was recently acquired by Coinbase, and Galaxy Digital.


Democratize Machine Learning with Customizable ML Anomalies

Customizable machine learning (ML) based anomalies for Azure Sentinel are now available for public preview. Security analysts can use anomalies to reduce investigation and hunting time as well as improve their detections. Typically, these benefits come at the cost of a high benign positive rate, but Azure Sentinel’s customizable anomaly models are tuned by our data science team and trained with the data in your Sentinel workspace to minimize the benign positive rate, providing out-of-the box value. If security analysts need to tune them further, however, the process is simple and requires no knowledge of machine learning. ... A new rule type called “Anomaly” has been added to Azure Sentinel’s Analytics blade. The customizable anomalies feature provides built-in anomaly templates for immediate value. Each anomaly template is backed by an ML model that can process millions of events in your Azure Sentinel workspace. You don’t need to worry about managing the ML run-time environment for anomalies because we take care of everything behind the scenes. In public preview, all built-in anomaly rules are enabled by default in your workspace. 


How to stop AI from recognizing your face in selfies

Most of the tools, including Fawkes, take the same basic approach. They make tiny changes to an image that are hard to spot with a human eye but throw off an AI, causing it to misidentify who or what it sees in a photo. This technique is very close to a kind of adversarial attack, where small alterations to input data can force deep-learning models to make big mistakes. Give Fawkes a bunch of selfies and it will add pixel-level perturbations to the images that stop state-of-the-art facial recognition systems from identifying who is in the photos. Unlike previous ways of doing this, such as wearing AI-spoofing face paint, it leaves the images apparently unchanged to humans. Wenger and her colleagues tested their tool against several widely used commercial facial recognition systems, including Amazon’s AWS Rekognition, Microsoft Azure, and Face++, developed by the Chinese company Megvii Technology. In a small experiment with a data set of 50 images, Fawkes was 100% effective against all of them, preventing models trained on tweaked images of people from later recognizing those people in fresh images.
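The pixel-level perturbation idea can be sketched with the classic fast-gradient-sign step from the adversarial-attack literature. Note that Fawkes itself optimizes a subtler, targeted "cloaking" objective, so this is only an illustration of the general mechanism:

```python
import numpy as np

def cloak(image, loss_grad, eps=0.01):
    """FGSM-style perturbation: nudge every pixel by at most eps in
    the direction given by the loss gradient, so the change stays
    imperceptible to humans while degrading the recognizer.

    image:     array of pixel values in [0, 1]
    loss_grad: gradient of the model's loss w.r.t. the pixels
    """
    perturbed = image + eps * np.sign(loss_grad)
    # Keep the result a valid image.
    return np.clip(perturbed, 0.0, 1.0)
```

The key property is the epsilon bound: no pixel moves by more than `eps`, which is why the cloaked selfie looks unchanged to a human while a model trained on it learns a distorted representation of the face.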


Agile Transformation: Bringing the Porsche Experience into the Digital Future with SAFe

Agile means, in fact, many things, but above all, it is a shared commitment. What really matters are the underlying values such as openness, self-commitment, focus. Not to forget the main principles behind agile work: customer orientation, embracing change and continuous improvement, empowerment and self-organization, simplicity, and transparency. In other words, what we learned quite early on is the importance to establish not only ambitious goals but also a shared vision across teams. That requires bringing together different goals and building alignment around a common purpose. Furthermore, we have learned that it is important to focus on incremental change. We now focus on a small number of topics and pursue them persistently. Transformation takes time. Lifelong learning also means that change is an ongoing process — it never ends. Sometimes, change may be hard, but we are not alone. It affects many areas outside the Digital Product Organization and it is essential that we take others along on the journey. Finally, it is important to keep in mind that successful and long-lived companies are usually the ones that learn to be agile and stable at the same time.


Recruiting and retaining diverse cloud security talent

The first step to encouraging more diversity within the cyber security workforce is representation. Businesses need to look at their teams and collaborate with their community and industry to create a platform that will inspire individuals into industries they may not have considered before. For example, company representatives at events act as role models, and their individual passion can be a strong inspiration and draw for a wide range of candidates. For this reason, it’s vital that security and cloud teams – and in particular members from diverse backgrounds – have a voice on traditional media and social platforms. Diverse voices should be seen and heard in newspapers, on corporate blogs, and in broadcast, where they can share insight into their careers and expertise, encouraging new talent to join the industry and their business specifically. Similarly, mentorship programmes help businesses to attract and retain talent. For those moving into the industry, changing companies, or transitioning into a new role, having a mentor provides support, the comfort of representation, and showcases their achievements. 


3 areas of implicitly trusted infrastructure that can lead to supply chain compromises

Once the server a software repository is hosted on is compromised, an attacker can do just about anything with the repositories on that machine if the users of the repository are not using signed git commits. Signing commits works much like author-signed packages from package repositories, but brings that authentication down to the individual code change. To be effective, this requires every user of the repository to sign their commits, which is weighty from a user perspective. PGP is not the most intuitive of tools and will likely require some user training to implement, but signed commits are the only way to verify that commits are coming from the original developers, and that inconvenience is a necessary trade-off if you want to prevent malicious committers masquerading as developers. This would have also made the HTTPS-based commits in the PHP project’s repository immediately suspicious. Signed commits do not, however, alleviate all problems, as a compromised server with a repository on it can allow the attacker to inject themselves into several locations during the commit process.
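Rolling out signed commits is mostly one-time git configuration per contributor; a sketch of the commands involved (the key ID is a placeholder for each developer's own GPG key, and a real rollout would also enforce verified signatures server-side, e.g. via branch protection):

```shell
# One-time setup per developer; <YOUR_KEY_ID> is a placeholder
git config --global user.signingkey <YOUR_KEY_ID>
git config --global commit.gpgsign true   # sign every commit by default

# Or force signing explicitly on a single commit
git commit -S -m "Fix input validation"

# Reviewers verify signatures when auditing history
git log --show-signature
```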



Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek

Daily Tech Digest - May 12, 2021

Web Application Security is not API Security

Broken Object Level Authorization (BOLA) is number one on the OWASP API Security Top 10 list. Uber’s API had this vulnerability. Thankfully, it was discovered by security researchers before malicious actors did damage (as far as we know). But it illustrates well how dangerous BOLA can be. The vulnerability leaked sensitive information, including an authentication token that could be used to perform a full account takeover. The vulnerability appears when a new driver joins the Uber platform. The browser sends the “userUuid” to the API endpoint, and the API returns data about the driver used to populate the client. ... Application security technologies such as Web Application Firewall, Next Generation Web Application Firewall, and Runtime Application Self-Protection don’t typically find the kinds of vulnerabilities we’ve discussed. These attacks present as ordinary traffic, so many defenses let them through. The future of API security lies with business logic. Application security tools have to understand the application’s context and business logic to know that non-managers shouldn’t be adding collaborators to a store or that a client shouldn’t access that user ID’s information.
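The fix for BOLA is to enforce ownership server-side instead of trusting the client-supplied object ID. A minimal sketch, in which the data store, field names, and tokens are hypothetical (not Uber's actual API):

```python
# Illustrative server-side store mapping object IDs to owners.
DRIVERS = {
    "uuid-1": {"owner": "alice", "token": "secret-a"},
    "uuid-2": {"owner": "bob",   "token": "secret-b"},
}

def get_driver(authenticated_user, user_uuid):
    """Object-level authorization check: the caller may fetch a
    record only if they own it, regardless of which UUID the
    client chose to send."""
    record = DRIVERS.get(user_uuid)
    if record is None or record["owner"] != authenticated_user:
        raise PermissionError("not authorized for this object")
    return record
```

Without the ownership check, any authenticated client could iterate UUIDs and harvest other users' tokens, which is exactly the account-takeover path described above.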


Why IT should integrate information security with digital initiatives

Integrating information security with digital initiatives can also go a long way in dealing with the rise in ransomware attacks, as explained by W. Curtis Preston, chief technical evangelist at Druva. “Ransomware attacks are particularly becoming a daily occurrence, and it’s only gotten worse since the pandemic,” said Preston. “Just last year, the FBI reported a 400 percent increase in ransomware, and the rates of these attacks are not predicted to slow down anytime soon. These attacks not only cause significant financial damage, but can diminish a brand’s reputation and customer trust. “For organisations looking to remain secure while keeping pace with today’s digitised business landscape, integrating security with digital initiatives is imperative. A holistic approach to security that includes detection, resilience, and data recovery will allow organisations to mitigate cyber risk and thrive in today’s digital landscape. “Security must also be embedded into the organisation’s culture. This means prioritising security, and ensuring that security experts are involved in critical business decision making from an early stage....”


UK to fund national cyber teams in Global South

Raab said the UN’s recent unanimous agreement on principles for how states should operate in cyber space was an important stepping stone, but the UK now wanted wider agreement on how to respond to nation-states that “systematically commit malicious cyber attacks”. “We have got to win hearts and minds across the world for our positive vision of cyber space as a free space, open to all responsible users and there for the benefit of the whole world,” said Raab. “And, frankly, we’ve got to prevent China, Russia and others from filling the multilateral vacuum. That means doing a lot more to support the poorest and most vulnerable countries. “Today I am very pleased to announce that the UK government will invest £22m in new funding to support cyber capacity building in those vulnerable countries, particularly in Africa and the Indo-Pacific. “That money will go to supporting national cyber response teams, advising on mass online safety awareness campaigns, and collaborating with Interpol to set up a new cyber operations hub in Africa. The idea of that will be to improve co-operation on cyber crime investigations, and support the countries involved to mount joint operations.”


Why You Should Be Prepared to Pay a Ransom

A clear-eyed approach to the ransomware threat also makes it easier to handle the PR fallout from an attack. That's partly because you can plan ahead, and figure out how to create a crisis communications strategy that's aligned with the reality of the situation you find yourself in. Just as importantly, though, it's far easier to explain your ransom payments to customers and shareholders if you've been upfront about the risks you face, and haven't previously claimed that you'd never, ever pay to retrieve stolen data. Perhaps the most important reason to be honest about your ransomware response strategy, though, is that it gives you full visibility into the true cost of ransomware attacks, which in turn allows you to make more realistic cybersecurity ROI calculations. When you know how much a ransomware attack will cost you — including the ransom, the fines, and the potential damage to your brand — then you can make smarter and more informed decisions about how much you should be investing in cybersecurity designed to keep your data safe. It's always better, after all, to make sensible investments in security upfront and avoid getting hacked in the first place.
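The ROI calculation described here boils down to an expected-loss comparison; a back-of-the-envelope sketch, in which all figures and the linear risk-reduction assumption are purely illustrative:

```python
def expected_attack_cost(p_attack, ransom, fines, brand_damage):
    """Expected loss from a ransomware attack: probability of being
    hit times the full cost of an incident (ransom, fines, brand)."""
    return p_attack * (ransom + fines + brand_damage)

def investment_justified(security_spend, risk_reduction, **costs):
    """A security investment pays off if the expected loss it
    removes exceeds what it costs (risk_reduction is the assumed
    fraction of expected loss eliminated)."""
    avoided = risk_reduction * expected_attack_cost(**costs)
    return avoided > security_spend
```

The point of being honest about total attack cost, including the ransom, is that the `ransom + fines + brand_damage` term is otherwise understated, which systematically biases the comparison against up-front security spending.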


Prediction: The future of CX

Predictive CX platforms allow companies to better measure and manage their CX performance; they also inform and improve strategic decision making. These systems make it possible for CX leaders to create an accurate and quantified view of the factors that are propelling customer experience and business performance, and they become the foundation to link CX to value and to build clear business cases for CX improvement. They also create a holistic view of the satisfaction and value potential of every customer that can be acted upon in near real time. Leaders who have built such systems are creating substantial value through a wide array of applications across performance management, strategic planning, and real-time customer engagement. ... Prioritizing CX efforts through intentional strategic planning is another promising use case for data-driven systems that allow CX leaders to understand which operational, customer, and financial factors are creating systemic issues or opportunities over time. One US healthcare payer, for example, built a “journey lake” to determine how to improve its customer care.


Artificial intelligence revolution offers benefits and challenges

"Knowing where and how to use AI is not always easy. Innovation in business is typically incremental and rarely transformative to the extent that more vociferous AI hype suggests. Because AI and the automation it entails come at a cost, businesses will need to find the optimal level of AI that integrates the new with the old, balancing the costs of acquisition and disruption with productivity, quality and flexibility needs and expectations." "This leads us to a second caveat. AI, especially 'affordable AI,' still has limited capabilities. AI relies on data utilization and exploitation. From a business perspective, this requires knowing what data the business has—and its potential value to improving products or processes. These are not givens." The authors nominate ethical dilemmas as a third risk facing the uptake of AI. They note the negative experiences "of Microsoft's racist chatbot (a poorly developed chatbot that mimicked the provocative language of its users), Amazon's AI-based recruitment tool that ignored female job applicants (and) Australia's Robodebt. Garbage in, garbage out." At the very least, these AI tools were not trained appropriately. But training isn't everything.


Three post-pandemic predictions for the world of work

Let’s face it — many executives in senior leadership positions do not invest much time or energy in the leadership part of their role. Instead, they might simply be swept along by the busywork of endless meetings, or they are focused more on advancing their own careers and engaging in corporate politics. But the actual work of leadership is more intentional. With so many companies adopting policies that allow for remote work, the burden that is shifted onto leadership is greater. The C-suite needs to articulate the strategy in ways that provide clear signals to everyone in terms of what they should be working on and why it’s important. And with so many people working out of the office, fostering and embedding the corporate culture will have to become a priority, given that colleagues may seem more like ships passing in the night (if they meet in person at all). Leaders will have to work overtime to share the company’s values, and the stories behind them. And they’ll need to reinforce those values at every employee touch point if people are going to adopt them as they did when everyone was together.


The best CISOs think like Batman, not Superman

Why should CISOs learn to think like Batman? For starters, Batman knows that fighting crime isn’t a popularity contest and doesn’t expect thanks from the people he’s trying to protect. In the same way, CISOs should accept that if they’re popular, they’re probably doing their job wrong. People should feel a bit of angst when the CISO’s shadow falls over their desk — because the CISO should be prodding them to make uncomfortable decisions, badgering them to do better, and preventing them from settling into complacency. Your role isn’t to keep people happy — it’s to keep them safe, despite the groaning and muttering your efforts inspire. Batman also knows that you can’t fight crime by basking in the sunshine. Instead, you’ve got to know the city’s underbelly and fight crooks and gangsters on their own turf. In just the same way, CISOs need to live with a foot in the underworld. It’s only by understanding the way that hackers think and operate that you can hope to keep your organization safe, and that means knowing your way around the murkier corners of the dark web and spending plenty of time tracking the scripts, strategies, and other dirty tricks being shared by the black-hat crowd.


AI offers an array of document processing opportunities

Utilising neural networks, AI-driven document processing platforms offer a leapfrog advance over traditional recognition technologies. At the outset, a system is ‘trained’ so that a consolidated core knowledge base is created about a particular (spoken) language, form and/or document type. In AI jargon, this is known as the ‘inference’. This knowledge base then expands and grows over time as more and more information is fed into the system and it self-learns, becoming able to recognise documents and their contents as they arrive. This is achieved via a feedback ‘re-training loop’ – think of it as supervised learning overseen by a human – whereby errors in the system are corrected when they arise so that the inference (and the metadata underlying it) updates, learns and can then deal with similar situations on its own when they next appear. It’s not dissimilar to how the human brain works, and how children learn a language: the more kids talk, make mistakes and are corrected, the better they get at speaking. The same is true of AI applied to document analysis and processing. The inference becomes ever more knowledgeable and accurate.
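The re-training loop described above can be sketched in miniature. This is a toy illustration, not any vendor's actual pipeline: the "knowledge base" is just keyword counts per document type, and all class, method and document names are hypothetical.

```python
from collections import defaultdict

class DocumentClassifier:
    """Toy document classifier with a human-in-the-loop re-training loop.

    The 'knowledge base' is keyword counts per document type; corrections
    from a human reviewer update those counts, so the system handles
    similar documents correctly the next time they appear.
    """

    def __init__(self):
        # word -> {doc_type: count}: the consolidated knowledge base
        self.knowledge = defaultdict(lambda: defaultdict(int))

    def train(self, text, doc_type):
        for word in text.lower().split():
            self.knowledge[word][doc_type] += 1

    def classify(self, text):
        scores = defaultdict(int)
        for word in text.lower().split():
            for doc_type, count in self.knowledge[word].items():
                scores[doc_type] += count
        return max(scores, key=scores.get) if scores else None

    def correct(self, text, right_type):
        # the supervised feedback loop: a human fixes an error and the
        # knowledge base updates, improving future classifications
        self.train(text, right_type)

clf = DocumentClassifier()
clf.train("invoice total amount due payment", "invoice")
clf.train("dear hiring manager resume experience", "cover_letter")

print(clf.classify("amount due on this invoice"))   # invoice
clf.correct("purchase order amount reference", "purchase_order")
print(clf.classify("purchase order reference"))     # purchase_order
```

The same shape scales up: replace the keyword counts with a neural model and the `correct` call with batched re-training on reviewer-verified samples.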


The Long Road to Rebuilding Trust after 'Golden SAML'-Like Attacks

Re-establishing trust in the aftermath of a Golden SAML attack or similar attacks can be potentially disruptive. If an organization suspects that a Golden SAML attack has been used against them, the most important step is to rotate the token signing and token encryption certificates in ADFS twice in rapid succession, says Doug Bienstock, manager at FireEye Mandiant's consulting group. This action should be done in tandem with traditional eradication measures for blocking any known malware and resetting passwords across the enterprise, he says. Organizations that don't rotate — or change — the keys twice in rapid succession run the risk of a copy of the previous potentially compromised certificates being used to forge SAML tokens. CyberArk's Reiner says key rotation could cause disruption if security teams are not prudent about how it is implemented. "Rotating means revoking the old key and creating a new one," he says. "That means you have removed the trust between your own network and other cloud services." In normal situations, when an organization wants to rotate existing keys, there's a grace period during which the old key will continue to work while the new one is rolled out.



Quote for the day:

“One of the most sincere forms of respect is actually listening to what another has to say.” -- Bryant H. McGill

Daily Tech Digest - May 11, 2021

5 reasons why companies need to develop and prioritize their AI Business strategies now!

We live in a technological moment comparable to the early days of the Internet, which sparked fascination — and even some apocalyptic predictions and fears — when it first appeared. Today, we can all see that the Internet is such a natural part of our everyday lives that we only notice its existence when it’s missing. I’m betting my career that the same will soon happen with AI. But for companies to benefit from this disruption, they have to stop focusing on the technology itself and see it as an amplifier of the value that can be offered to customers, which is at the heart of business strategies. Many of the discussions about Artificial Intelligence in teams, departments, and companies today still start with the question, “What data do we have, and what technology do we need to work with it?” Please don’t do that anymore! It’s time to turn the tables and look at what the customers’ needs are. What are the questions your customers are asking that, when answered, will generate value for them and the business? Only when you have this clarity should you start looking for the correct data and related AI applications.


Critical Infrastructure Under Attack

Unlike ransomware, which must penetrate IT systems before it can wreak havoc, DDoS attacks appeal to cybercriminals as a more convenient weapon: they don't have to get around multiple security layers to produce the desired ill effects. The FBI has warned that more DDoS attacks are employing amplification techniques to target US organizations after noting a surge in attack attempts after February 2020. The warnings came after other reports of high-profile DDoS attacks. In February, for example, the largest known DDoS attack was aimed at Amazon Web Services. The company's infrastructure was slammed with a jaw-dropping 2.3 Tb/s — or 20.6 million requests per second — assault, Amazon reported. The US Cybersecurity and Infrastructure Security Agency (CISA) also acknowledged the global threat of DDoS attacks. Similarly, in November, New Zealand cybersecurity organization CertNZ issued an alert about emails sent to financial firms that threatened a DDoS attack unless a ransom was paid. Predominantly, cybercriminals are just after money.


9 tips for speeding up your business Wi-Fi

For larger networks, consider using a map-based Wi-Fi surveying tool such as those from AirMagnet, Ekahau or TamoGraph during deployment and for periodic checks. Along with capturing Wi-Fi signals, these tools enable you to run a full RF spectrum scan to look for non-Wi-Fi interference as well. For ongoing interference monitoring, use any functionality built into the APs that will alert you to rogue APs and/or other interference. Map-based Wi-Fi surveying tools usually offer some automated channel analysis and planning features. However, if you're doing a survey on a smaller network with a simple Wi-Fi stumbler, you'll have to manually create a channel plan. Start assigning channels to APs on the outer edges of your coverage area first, as that’s where interference from neighboring wireless networks is most likely to be. Then work your way into the middle, where it’s more likely that co-interference from your own APs is the problem. ... If you have more than one SSID configured on the APs, keep in mind that each virtual wireless network must broadcast separate beacons and management packets. This consumes more airtime, so use multiple SSIDs sparingly.
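The outer-edges-first heuristic can be sketched as follows. This is a toy model under stated assumptions: the common non-overlapping 2.4 GHz channels 1, 6 and 11, made-up AP names and coordinates, and straight-line distance as a stand-in for measured interference (real survey tools use RF measurements, not geometry).

```python
import math

# Assumption: a typical North American 2.4 GHz plan with three
# non-overlapping channels.
CHANNELS = [1, 6, 11]

def plan_channels(aps, center, neighbor_radius=60.0):
    """Assign channels starting with the APs furthest from the coverage
    centre (the outer edge), then work inward. Each AP avoids channels
    already used by nearby APs where possible."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    plan = {}
    # outer edge first: sort by distance from the centre, descending
    for name, pos in sorted(aps.items(), key=lambda kv: -dist(kv[1], center)):
        taken = {plan[n] for n, p in aps.items()
                 if n in plan and dist(p, pos) <= neighbor_radius}
        free = [c for c in CHANNELS if c not in taken]
        plan[name] = free[0] if free else CHANNELS[0]  # reuse if all taken
    return plan

# Hypothetical floor plan: two edge APs and one in the middle.
aps = {"ap-east": (50, 0), "ap-west": (-50, 0), "ap-core": (0, 0)}
print(plan_channels(aps, center=(0, 0)))
# {'ap-east': 1, 'ap-west': 1, 'ap-core': 6}
```

The edge APs are far enough apart to reuse channel 1, while the middle AP, within range of both, is pushed to a different channel, which is exactly the co-interference the article describes.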


Increasing Developer Effectiveness by Optimizing Feedback Loops

The key here is usefulness and empowerment. If the process is useful, accurate and easy to interpret, and the team is empowered to optimize, then it will naturally get shortened. Developers will want to run useful validations more often, earlier in the cycle. Take the example of regression tests owned by another, siloed team that are flaky, slow, run out of cycle, and hard to diagnose. It is unlikely that they will get optimized, because the developers don’t perceive much value in them; whereas with a test suite based on the test pyramid that is owned by the team, lives in the same code base, and is the gate to deployment to all environments, the team will come up with ways to improve it. You can apply the concept of feedback loops at different scales, for example, super small loops: when the developer is coding, what feedback can we give them to help and nudge them? How does your IDE inform you that you have made a mistake, or how does it help you find the syntax for the command you are looking for? When we look at the developer flow, discovering information is a big source of friction.


Coding interviews suck. Can we make them better?

Another issue is that technical interviews aren't standardized, meaning they can vary wildly from company to company – making it almost impossible for candidates to prepare fully. As a result, their fate rests largely in the hands of whoever is carrying out the interview on that day. "The interviews typically are not well thought out and not structured," says Tigran Sloyan, co-founder and CEO of assessment platform CodeSignal. "What typically happens is, you have a developer whose job it is to evaluate this person. Most developers don't have either practice or understanding of what it means to conduct a structured interview." When there's so much variability, biases begin making their way into the process, says Sloyan. "Where there's lack of structure, there is more bias, and what ends up happening is if whoever you're interviewing looks like you and talks like you, you [the interviewer] start giving them more hints, you start leading them down the right paths." The reverse is also true, Sloyan says. "If they don't look like you and talk like you, you might throw them a couple more curveballs, and then good luck making it through that process."


RPA and How It's Adding Value in the Workplace

Automation is the use of advanced technologies to replace humans in low-value, repetitive, and tedious tasks with the goal of increasing profitability and lowering operating costs. It benefits businesses by allowing them more flexibility in how they operate and by increasing employee productivity: automating the most time-consuming and repetitive tasks lets employees focus on more important work. In practice, the objective is to improve workflows and processes. Alex Kwiatkowski is principal industry consultant for financial services at UK-based SAS. He points out that banks are constantly seeking efficiency gains. Firms have long sought to remove time-consuming, and occasionally error-prone, manual tasks in favor of technology-infused, straight-through processing in the front, middle and back offices. More than just fanciful hyperbole, automation has proven an enabler to achieving such goals. No matter the bank, there's always room for improvement. Keep in mind, too, that these advances need not be giant leaps forward but are often the aggregation of marginal gains, if you will.


Shedding Light on the DarkSide Ransomware Attack

Like other gangs that operate modern ransomware codes, such as Sodinokibi and Maze, DarkSide blends crypto-locking data with data exfiltration and extortion. If they are not initially paid for a decryption key, the attackers threaten to publish confidential data they stole from the victim and post it on their dedicated website, DarkSide Leaks, for at least 6 months. When a ransom note appears on an encrypted networked device, the note also communicates a TOR URL to a page called “Your personal leak page” as part of the threat that if the ransom is not paid, data will be uploaded to that URL. Ransom is demanded in Bitcoin or Monero. If it is not paid by a specific initial deadline, the amount doubles. ... Most ransomware operators understand that they need speed to encrypt as much data as possible as quickly as they can. They, therefore, opt to use symmetric encryption for that first phase and then encrypt the first key with an asymmetric key. In DarkSide’s case, they claim to have come up with an accelerated implementation; the malware uses the Salsa20 stream cipher to encrypt victim data.
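The hybrid scheme described (fast symmetric encryption of the bulk data, with the per-file symmetric key then encrypted separately) can be sketched with standard-library primitives only. This is purely illustrative: Salsa20 is not in Python's standard library, so a SHA-256 counter-mode keystream stands in for it, and a second symmetric key stands in for the attacker's asymmetric public key, which is what real ransomware uses for the wrapping step.

```python
import hashlib
import secrets

def keystream(key, nonce, length):
    """Toy stream-cipher keystream (a stand-in for Salsa20):
    SHA-256 in counter mode. Illustrative only, not for real use."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "little")).digest()
        counter += 1
    return out[:length]

def stream_encrypt(key, nonce, data):
    # XOR with the keystream; the same call decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

def encrypt_file(data, wrap_key):
    """Hybrid scheme: bulk data under a fast per-file symmetric key,
    then the symmetric key itself 'wrapped' so only the holder of the
    unwrapping key can recover it."""
    file_key = secrets.token_bytes(32)
    nonce = secrets.token_bytes(8)
    ciphertext = stream_encrypt(file_key, nonce, data)
    wrapped_key = stream_encrypt(wrap_key, nonce, file_key)
    return nonce, wrapped_key, ciphertext

def decrypt_file(nonce, wrapped_key, ciphertext, wrap_key):
    file_key = stream_encrypt(wrap_key, nonce, wrapped_key)  # unwrap
    return stream_encrypt(file_key, nonce, ciphertext)

wrap_key = secrets.token_bytes(32)
blob = encrypt_file(b"confidential records", wrap_key)
print(decrypt_file(*blob, wrap_key))  # b'confidential records'
```

The design point the attackers exploit is that symmetric ciphers are fast enough to encrypt terabytes quickly, while the asymmetric wrap means no decryption-capable key ever needs to exist on the victim's machine.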


Can We Trust the Cloud Not to Fail?

Reducing, or transforming, a failure detector with one set of completeness and accuracy properties into a failure detector with another set of such properties means finding a transformation algorithm that complements the original failure detection algorithm and guarantees it behaves the same way as the target failure detection algorithm in the same environment, given the same failure patterns. This concept is formally called reducibility of failure detectors. Because in practice it can be difficult to implement strongly complete failure detectors in asynchronous systems, T.D. Chandra and S. Toueg showed that failure detectors with weak completeness can be transformed into failure detectors with strong completeness. In that sense, the original timeout-based failure detector algorithm (described earlier) was reduced, or transformed, into an Eventually Weak Failure Detector by using increasing timeouts.
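The increasing-timeouts idea can be sketched in a few lines. This is a minimal simulation with an explicit clock and hypothetical process names, not a production detector: each time a process is suspected prematurely (a heartbeat arrives after suspicion), its timeout doubles, so a slow-but-alive process is eventually never suspected again, which is the eventually weak accuracy property.

```python
class TimeoutFailureDetector:
    """Sketch of a timeout-based failure detector with increasing
    timeouts: premature suspicions back off the timeout, so wrong
    suspicions of slow-but-alive processes eventually stop."""

    def __init__(self, processes, initial_timeout=1.0):
        self.timeout = {p: initial_timeout for p in processes}
        self.last_heartbeat = {p: 0.0 for p in processes}
        self.suspected = set()

    def heartbeat(self, process, now):
        if process in self.suspected:
            # premature suspicion: un-suspect and double the timeout
            self.suspected.discard(process)
            self.timeout[process] *= 2
        self.last_heartbeat[process] = now

    def tick(self, now):
        # suspect every process whose heartbeat is overdue
        for p, last in self.last_heartbeat.items():
            if now - last > self.timeout[p]:
                self.suspected.add(p)
        return set(self.suspected)

fd = TimeoutFailureDetector(["p1", "p2"])
fd.heartbeat("p1", now=0.0)
fd.heartbeat("p2", now=0.0)
print(sorted(fd.tick(now=1.5)))  # ['p1', 'p2']: both overdue
fd.heartbeat("p1", now=1.6)      # p1 was alive, just slow: timeout doubles
print(sorted(fd.tick(now=3.0)))  # ['p2']: p1's longer timeout now tolerates the delay
```

If p2 has truly crashed, it never sends another heartbeat and stays suspected forever (strong completeness for crashed processes), while p1's timeout keeps growing until it is never wrongly suspected again.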


The 6 Dimensions of a Winning Resilience Strategy

As CIOs look to pivot to this new and more complete approach to building resilience, they will be coming from different places in the journey depending on their existing business strategy and investments. The good news is that this journey does not need to be undertaken in one swoop. CIOs can start with the quick wins, such as identifying and deploying the right collaboration tools, before moving on to longer-term processes, such as how best to migrate applications to the cloud and embrace cloud-native models. The starting point should be a resilience roadmap so you can plan when best to address the various dimensions of your strategy. This needs to be supported by a focus on your people to ensure they are able to leverage new technologies to their fullest and understand how to thrive in their work no matter where they are. Identifying what needs to change and putting in place an effective resilience strategy is now a critical business differentiator. In the past, business resilience was about doing the best in tough times, and often CIOs’ focus was on robust oversight and control to ensure that there were no security breaches while business-as-usual was disrupted.


Cloud data and security — what every healthcare technology leader needs to know

Knowing where an organisation’s data resides, who owns that data and what type of data it is, will ease any security incident and any legal or compliance implications. It will also ease an organisation’s ability to manage risk and improve their response over time. Commenting on the importance of knowing your data, William Klusovsky, Global Cybersecurity Strategy, Governance, Risk & Compliance Offering Lead at Avanade, said: “Often, technology leaders will forget that asset management is not just about keeping track of hardware, it means knowing where your data is, where your data flows and who owns that data.” The challenge of having a holistic view of an organisation’s data landscape is intensified by the problem of Shadow IT — the procurement of software and tech without IT’s knowledge. As new systems and applications are onboarded by various departments it’s easy to lose track of these, and what data sits within them, without a strong systems acquisition process. With healthcare specifically, the rapid introduction of IoT medical devices and all the new data they’re generating, exemplifies this. 



Quote for the day:

"Stories are the single most powerful weapon in a leader's arsenal." -- Howard Gardner

Daily Tech Digest - May 10, 2021

How to minimise technology risk and ensure that AI projects succeed

Organisations are using lots of different technologies and multiple processes to try and manage all this, and that’s what’s causing the delay around getting models into production and being used by the business. If we can have one platform that allows us to address all of those key areas, then the speed at which an organisation will gain value from that platform is massively increased. And to do that, you need an environment to develop the applications to the highest level of quality and internal customer satisfaction, and an environment to then consume those applications easily by the business. Sounds like the cloud, right? Well, not always. When you look at aligning AI, you also have to think about how AI is consumed across an organisation; you need a method to move it from R&D into production, but when it’s deployed, how do we actually use it? What we are hearing is that what they actually want is a hybrid development and provisioning environment, where this combination of technologies could run with no issues, no matter what your development or target environment is, such as on cloud, on-premise, or a combination.


Getting a grip on basic cyber hygiene

In regard to cyber defense, basic cyber hygiene or a lack thereof, can mean the difference between a thwarted or successful cyber-attack against your organization. In the latter, the results can be catastrophic. Almost all successful cyber-attacks take advantage of conditions that could reasonably be described as “poor cyber hygiene” – not patching, poor configuration management, keeping outdated solutions in place, etc. Inevitably, poor cyber hygiene invites risks and can put the overall resilience of an organization into jeopardy. Not surprisingly, today’s security focus is on risk management: identifying risks and vulnerabilities, and eliminating and mitigating those risks where possible, to make sure your organization is adequately protected. The challenge here is that cybersecurity is often an afterthought. To improve a cybersecurity program, there needs to be a specific action plan that the entire cyber ecosystem of users, suppliers, and authorities (government, regulators, legal system, etc.) can understand and execute. That plan should have an emphasis on basic cyber hygiene and be backed up by implementation guidance, tools and services, and success measures.


Get started with MLOps

Getting machine learning (ML) models into production is hard work. Depending on the level of ambition, it can be surprisingly hard, actually. In this post I’ll go over my personal thoughts (with implementation examples) on principles suitable for the journey of putting ML models into production within a regulated industry; i.e. when everything needs to be auditable, compliant and in control — a situation where a hacked together API deployed on an EC2 instance is not going to cut it. Machine Learning Operations (MLOps) refers to an approach where a combination of DevOps and software engineering is leveraged in a manner that enables deploying and maintaining ML models in production reliably and efficiently. Plenty of information can be found online discussing the conceptual ins and outs of MLOps, so instead this article will focus on being pragmatic with a lot of hands-on code etc., basically setting up a proof of concept MLOps framework based on open source tools. The final code can be found on github. At its core it is all about getting ML models into production; but what does that mean?


ESB vs Kafka

The appropriate answer to both questions is: “Yes, but….” In spite of their similarities, ESBs and stream-processing technologies such as Kafka are not so much designed for different use cases as for wholly different worlds. True, a flow of message traffic is potentially “unbounded” – e.g., an ESB might transmit messages that encapsulate the ever-changing history of an application’s state – but each of these messages is, in effect, an artifact of a world of discrete, partitioned – i.e., atomic – moments. “Message queues are always dealing in the discrete, but they also work very hard to not lose messages, not to lose data, to guarantee delivery, and to guarantee sequence and ordering in message transmits,” said Mark Madsen, an engineering fellow with Teradata. Stream processing, by contrast, corresponds to a world that is in a constant state of becoming; a world in which – as the pre-Socratic philosopher Heraclitus famously put it – “everything flows.” In other words, says Madsen, using an ESB to support stream processing is roughly analogous to filling a swimming pool with a Rube Goldberg-like assembly line of buckets, as distinct from a high-pressure feed from a hose.
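The discrete-versus-flowing distinction can be made concrete with two toy abstractions: a queue whose reads are destructive, and an append-only log that consumers read from their own offsets. This sketches the concepts only, not any real ESB product or Kafka's actual API.

```python
from collections import deque

class MessageQueue:
    """ESB-style queue: each message is delivered once and then gone."""
    def __init__(self):
        self._q = deque()

    def publish(self, msg):
        self._q.append(msg)

    def consume(self):
        return self._q.popleft() if self._q else None  # destructive read

class StreamLog:
    """Kafka-style log: an append-only sequence; consumers keep their
    own offsets, so the same events can be replayed by many readers."""
    def __init__(self):
        self._log = []

    def publish(self, msg):
        self._log.append(msg)

    def read_from(self, offset):
        return self._log[offset:]  # non-destructive, replayable

q = MessageQueue()
q.publish("order-created")
q.consume()
print(q.consume())      # None: the message is gone

s = StreamLog()
s.publish("order-created")
s.publish("order-shipped")
print(s.read_from(0))   # ['order-created', 'order-shipped']
```

The log's replayability is what makes stream processing suit a world "in a constant state of becoming": a new consumer can join later and still see the whole history, something a delivered-and-gone queue cannot offer.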


A quick rundown of multi-runtime microservices architecture

A multi-runtime microservices architecture represents a two-component model that very much resembles the classic client-server relationship. However, the components that define multi-runtime microservices -- the micrologic and the mecha -- reside on the same host. Despite this, the micrologic and mecha components still operate on their own, independent runtimes (hence the term "multi-runtime" microservices). The micrologic is not, strictly speaking, a component that lives among the various microservices that exist in your environment. Instead, it contains the underlying business logic needed to facilitate communication using predefined APIs and protocols. It is responsible only for this core business logic, not for any logic contained within the individual microservices. The only thing it needs to interact with is the second multi-runtime microservices component -- the mecha. The mecha is a distributed, reusable and configurable component that provides off-the-shelf primitives geared toward distributed services. The mecha uses declarative configuration to determine the desired application states and manage them, often relying on plain text formats such as JSON and YAML.
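A minimal sketch of the split, with every configuration key, class and function name invented for illustration: the mecha supplies generic retry and state primitives driven by a declarative JSON configuration, while the micrologic stays pure business logic with no distributed-systems plumbing.

```python
import json

# Hypothetical declarative configuration the mecha consumes; the keys
# ("state_store", "retries") are illustrative, not from any real product.
MECHA_CONFIG = json.loads("""
{
  "state_store": "memory",
  "retries": 3
}
""")

class Mecha:
    """Reusable runtime component: retries, state, delivery guarantees."""
    def __init__(self, config):
        self.retries = config["retries"]
        self.state = {} if config["state_store"] == "memory" else None

    def invoke(self, handler, payload):
        # off-the-shelf retry primitive, so the micrologic never sees it
        for _attempt in range(self.retries):
            try:
                return handler(payload, self.state)
            except ConnectionError:
                continue
        raise RuntimeError("all retries exhausted")

# The micrologic: pure business logic, handed state by the mecha.
def add_order(payload, state):
    state[payload["id"]] = payload["total"]
    return len(state)

mecha = Mecha(MECHA_CONFIG)
print(mecha.invoke(add_order, {"id": "o-1", "total": 99}))  # 1
```

In a real deployment the two pieces would run as separate processes (e.g. a sidecar), which is what makes the runtimes genuinely independent; here they share a process purely to keep the sketch self-contained.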


Basics Of Julia Programming Language For Data Scientists

Julia is a relatively new, fast, high-level dynamic programming language. Although it is a general-purpose language and can be used to write all kinds of applications, much of its package ecosystem and features are designed for high-level numerical computing. Julia draws from various languages, from low-level systems programming languages like C to high-level dynamically typed languages such as Python, R and MATLAB, and this is reflected in its optional typing, its syntax and its features. Julia doesn’t have classes; it works around this by supporting the quick creation of custom types, defined with the struct keyword, and methods for those types. These functions are not limited to the types they are created for and can have many versions, a feature called multiple dispatch. Julia also supports direct calls to C functions without any wrapper API. And instead of defining scope based on indentation like Python, Julia uses the keyword end, much akin to MATLAB. It would be impractical to summarize all its features and idiosyncrasies here; you can refer to the wiki or the docs welcome page for a more comprehensive description of Julia.


NCSC publishes smart city security guidelines

Mark Jackson, Cisco’s national cyber security advisor for the UK and Ireland, said: “The complexity of the smart cities marketplace, with multiple device manufacturers and IT providers in play, could quite easily present cyber security issues that undermine these efforts. The NCSC’s principles are one of the most sophisticated pieces of government-led guidance published in Europe to date. “The guidance set out for connected places generally aligns to cyber security best practice for enterprise environments, but also accounts for the challenges of connecting up different systems within our national critical infrastructure. “With DCMS [the Department for Digital, Culture, Media and Sport] also planning to implement legislation around smart device security, this is indicative of a broader government strategy to level up IoT security across the board. “This will enable new initiatives in the field of connected places and smart cities to gather momentum across the UK – with cyber security baked into the design and build phase. As lockdown restrictions ease and people return to workplaces and town centres, they need assurance that their digital identities and data are protected as the world around them becomes more connected.”


What if the hybrid office isn’t real?

“A shift to hybrid work means that people will be returning to the office both with varying frequencies and for a new set of reasons,” says Brian Stromquist, co-leader of the technology workplace team at the San Francisco–based architecture and design firm Gensler. “What people are missing right now are in-person collaborations and a sense of cultural connection, so the workplace of the future — one that supports hybrid work — will be weighted toward these functions.” Offices will need a way to preserve a level playing field for those working from home and those on-site. One option is to make all meetings “remote” if not everyone is physically in the same space. That’s a possibility Steve Hare, CEO of Sage Group, a large U.K. software company, suggested to strategy+business last year. According to Stromquist, maintaining the right dynamic will require investing in technologies that create and foster connections between all employees, regardless of physical location. “We’re looking at tools like virtual portals that allow remote participants to feel like they’re there in the room, privy to the interactions and side conversations that you’d experience if you were there in person,” he says.


Real-time data movement is no longer a “nice to have”

Applications and systems can “publish” events to the mesh, while others can “subscribe” to whatever they are interested in, irrespective of where they are deployed: in the factory, the data centre or the cloud. This is essential for the critical industries we rely on, such as capital markets, Industry 4.0, and a functional supply chain. Indeed, there are few industries today that can do without as-it-happens updates on their systems. Businesses and consumers demand extreme responsiveness as a key part of a good customer experience, and many technologies depend on real-time updates to changes in the system. However, many existing methods for ensuring absolute control and precision of such time-sensitive logistics don’t holistically operate in real time, at scale, without data loss, and therefore leave room for fatal error. From retail, which relies on the online store being in constant communication with the warehouse and the dispatching team, to aviation, where pilots depend on real-time weather updates in order to carry their passengers to safety, today’s industries cannot afford anything other than real-time data movement. Overall, when data is enabled to move in this way, businesses can make better decisions.
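The publish/subscribe pattern described can be sketched in a few lines. This is a toy in-process version with invented topic names; a real event mesh spans brokers across factories, data centres and clouds, and decouples publishers from subscribers over the network.

```python
from collections import defaultdict

class EventMesh:
    """Minimal publish/subscribe sketch: producers publish to topics,
    subscribers register callbacks and receive matching events as they
    happen, without knowing where the publisher is deployed."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        for cb in self._subs[topic]:
            cb(event)

mesh = EventMesh()
received = []
mesh.subscribe("warehouse/stock", received.append)  # the online store
mesh.subscribe("warehouse/stock", received.append)  # the dispatch team
mesh.publish("warehouse/stock", {"sku": "A1", "qty": 5})
print(len(received))  # 2: both subscribers got the event
```

The key property is the fan-out: the warehouse publishes one stock event and every interested system (storefront, dispatch, analytics) receives it as it happens, with no point-to-point integration between them.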


The Cloud Comes of Age Amid Unprecedented Change

Just look at how businesses compete. The influx of cloud technologies during the pandemic has underlined that the technology stack is a core mode of differentiation. Industry competition is now frequently a battle between technology stacks, and the decisions leaders make around their cloud foundation, cloud services and cloud-based AI and edge applications will define their success. Look at manufacturing, where companies are using predictive analytics and robotics to inch ever closer to delivering highly customized on-demand products. The pandemic has forced even the most complex supply chain operations from manufacturers to operate at the whim of changing government requirements, consumer needs and other uncontrollable factors, such as daily pandemic fluctuations. Pivot quickly and you’ll not only emerge as leaders of your industry, you may even gain immeasurable consumer intimacy. A true cloud transformation should start with a plan to shift significant capabilities to cloud. It is more than just migrating a few enterprise applications. Implementing a “cloud first” strategy requires companies to completely reinvent their business for cloud by reimagining their products or services, workforce, and customer experiences.



Quote for the day:

"Don't try to be the "next". Instead, try to be the other, the changer, the new." -- Seth Godin

Daily Tech Digest - May 09, 2021

10 Business Models That Reimagine The Value Creation Of AI And ML

Humanizing experiences (HX) are disrupting and driving the democratization and commoditization of AI. These more human experiences rely on immersive AI. By 2030, immersive AI has the potential to co-create innovative products and services navigating through adjacencies and double cash flow, as opposed to a potential 20% decline in cash flow for nonadopters, according to McKinsey. GAFAM has been an influential force in pioneering and championing deep learning within its core business fabric. NATU and BAT have deeply embedded AI into their most profound routes. Google’s Maps and Indoor Navigation, Google Translate and Tesla’s autonomous cars all exemplify immersive AI. A global AI marketplace is an innovative business model that provides a common marketplace for AI product vendors, AI studios and sector/service enterprises to offer their niche ML models through a multisided platform and a nonlinear commercial model. Think Google Play, Amazon or the App Store. SingularityNet, Akira AI and Bonseyes are examples of multisided marketplaces.


Self-Supervised Learning Vs Semi-Supervised Learning: How They Differ

In the case of supervised learning, AI systems are fed with labelled data. But as we work with bigger models, it becomes difficult to label all the data. Additionally, there is just not enough labelled data for some tasks, such as training translation systems for low-resource languages. At the 2020 AAAI conference, Facebook’s chief AI scientist Yann LeCun presented self-supervised learning as a way to overcome these challenges. This technique obtains a supervisory signal from the data itself by leveraging its underlying structure. The general method for self-supervised learning is to predict an unobserved or hidden part of the input. For example, in NLP, a hidden word is predicted using the remaining words in the sentence. Since self-supervised learning uses the data's structure to learn, it can use various supervisory signals across large datasets without relying on labels. A self-supervised learning system aims at creating a data-efficient artificially intelligent system. It is generally referred to as an extension of, or even an improvement over, unsupervised learning methods. However, as opposed to unsupervised learning, self-supervised learning does not focus on clustering and grouping.
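The "predict the hidden part of the input" objective can be shown in miniature, with word co-occurrence counts standing in for a neural model. The point of the sketch is that the label (the masked word) comes from the data itself, with no human annotation; the corpus and the [mask] token convention are invented for illustration.

```python
from collections import defaultdict

def train_cooccurrence(corpus):
    """Count how often each word appears alongside each other word."""
    co = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            for j, ctx in enumerate(words):
                if i != j:
                    co[w][ctx] += 1
    return co

def predict_masked(co, sentence):
    """Self-supervised objective in miniature: guess the hidden word
    from its context. The supervisory signal is the data's own text."""
    context = [w for w in sentence.lower().split() if w != "[mask]"]
    scores = defaultdict(int)
    for candidate, ctxs in co.items():
        for c in context:
            scores[candidate] += ctxs.get(c, 0)
    # don't predict a word that is already visible in the context
    for c in context:
        scores.pop(c, None)
    return max(scores, key=scores.get) if scores else None

corpus = ["the cat sat on the mat", "the cat chased the mouse"]
co = train_cooccurrence(corpus)
print(predict_masked(co, "the [mask] sat on the mat"))  # cat
```

A real system such as a masked language model does the same thing at scale: hide tokens, predict them from context, and update a neural network on the error, all without a single human-provided label.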


Thinking About Switching Career to Data Science? Pick the Right Strategy!

As trivial as it sounds, a gigantic volume of blog posts, articles, books, videos, tutorials, talks, slide decks and online courses is at your service, most of it for FREE, to guide you in the direction you want to go. Use them, and use them often! Use these resources not only to learn new skills but also to learn about the differences between career paths in data science (product analyst, business analyst, statistician and so on), get a sense of the trends in data science, and figure out where you would fit! Read consistently: data science is a vast field, and the more you read and learn, the more valuable you become to your future employer! Use your network to connect with data scientists and speak with them about their roles, experiences, projects and career paths in analytics. Use your network to connect to opportunities you may not be aware of! Let people know you want to transition to data science and that you would appreciate their help along the way. Use your network to find roles where your current responsibilities and skills overlap with data science roles.


Artificial Intelligence Is The Transformative Force In Healthcare

Artificial intelligence, a technology that is nearly a household name today, is poised to become a transformational force in healthcare, an industry full of both challenges and opportunities. From chronic disease and radiology to cancer and risk assessment, artificial intelligence has shown its power by enabling precise, efficient and impactful interventions at exactly the right moment in a patient's care. The complexity and growth of data in healthcare have given rise to several types of artificial intelligence. Today, artificial intelligence and robotics have evolved to the stage where, for some tasks, they can take care of patients better than medical staff and human caretakers. The global artificial intelligence in healthcare market is expected to grow from US$4.9 billion in 2020 to US$45.2 billion by 2026, a projected CAGR of 44.9% over the forecast period. Artificial intelligence and related technologies are already prevalent in business and society and are rapidly moving into the healthcare sector.
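The cited figures can be sanity-checked with the standard compound-growth formula, using only the numbers quoted in the excerpt:

```python
# Compound growth: value_n = value_0 * (1 + rate) ** years
start_value = 4.9   # US$ billions, 2020
cagr = 0.449        # 44.9% projected annual growth
years = 6           # 2020 -> 2026

projected = start_value * (1 + cagr) ** years
print(f"US${projected:.1f} billion")  # -> US$45.4 billion, close to the cited US$45.2 billion
```

The small gap comes from rounding in the published CAGR; the quoted numbers are mutually consistent.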


Hadoop vs. Spark: Comparing the two big data frameworks

The fundamental architectural difference between Hadoop and Spark relates to how data is organized for processing. In Hadoop, all the data is split into blocks that are replicated across the disk drives of the various servers in a cluster, with HDFS providing high levels of redundancy and fault tolerance. Hadoop applications can then be run as a single job or a directed acyclic graph (DAG) that contains multiple jobs. In Hadoop 1.0, a centralized JobTracker service allocated MapReduce tasks across nodes that could run independently of each other, and a local TaskTracker service managed job execution by individual nodes. ... In Spark, data is accessed from external storage repositories, which could be HDFS, a cloud object store like Amazon Simple Storage Service or various databases and other data sources. While most processing is done in memory, the platform can also "spill" data to disk storage and process it there when data sets are too large to fit into the available memory. Spark can run on clusters managed by YARN, Mesos and Kubernetes or in a standalone mode. Similar to Hadoop, Spark's architecture has changed significantly from its original design. 
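The MapReduce model whose tasks Hadoop's JobTracker and TaskTracker services schedule can be sketched in plain Python. This is a toy, single-process analogue for illustration only; a real cluster shards the map and reduce phases across nodes, with the shuffle moving data between them:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: each input record emits (key, value) pairs independently,
    # which is what lets map tasks run in parallel across nodes.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group intermediate values by key across all map outputs.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a final result.
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big cluster", "big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # -> {'big': 3, 'data': 2, 'cluster': 1}
```

Spark expresses the same word count in a few chained transformations and keeps the intermediate data in memory, which is a large part of its performance advantage over disk-based MapReduce.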


How retailers are embracing artificial intelligence

Personalized recommendation engines have been a mainstay of shopping for years. There's a folk legend in data mining circles claiming that Target's data mining and analytics were so powerful it once recommended baby clothing to a teenage girl before her father knew she was pregnant. Sadly, it's just a myth, dating from a hype-filled 2012 New York Times report. But while big data and AI use cases for online shopping are still largely based in centralized data centers, a growing number of retailers are embracing edge computing and AI, both at the edge and in the cloud. Fulfillment centers are increasingly automating their warehouses to speed up deliveries and optimize space, which can make supply chains and logistics more efficient. In-store, robots are being used to stack shelves and clean floors. Machine vision is being brought in to scan shelves and manage inventory, suggest fashion ideas to customers and, in the case of Amazon Go and its competitors, remove the need for cashiers and traditional checkouts.


Designing for Behavior Change by Stephen Wendel

Designing for behavior change doesn’t require a specific product development methodology—it is intended to layer on top of your existing approach, whether it is agile, lean, Stage-Gate, or anything else. But to make things concrete, Figure 4 shows how the four stages of designing for behavior change can be applied to a simple iterative development process. At HelloWallet, we use a combination of lean and agile methods, and this sample process is based on what we’ve found to work. The person doing the work of designing for behavior change could be any one of these people. At HelloWallet, we have a dedicated person with a social science background on the product team (that’s me). But this work can be, and often is, done wonderfully by UX folks. They are closest to the look and feel of the product, and have its success directly in their hands. Product owners and managers are also well positioned to seamlessly integrate the skills of designing for behavior change to make their products effective. Finally, there’s a new movement of behavioral social scientists into applied product development and consulting at organizations like ideas42 and IrrationalLabs. 


Cybersecurity has much to learn from industrial safety planning

A scenario-based analysis makes it easier to understand risk without a high degree of technical jargon or acumen. The longstanding practices of safety engineers provide an excellent template for this kind of analysis: for instance, the hazard and operability (HAZOP) study, a process that examines and manages risk as it relates to the design and operation of industrial systems. One common method for performing a HAZOP is a process hazards analysis (PHA), in which specialized personnel develop scenarios that would result in an unsafe or hazardous condition. It is not a risk reduction strategy that simply looks at individual controls; it considers more broadly how the system works in unison and the different scenarios that could impact it. Cybersecurity threats are the work of deliberate and thoughtful adversaries, whereas safety scenarios often result from human or system errors and failures. As a result, a safety integrity level can be measured with some confidence by failure rates, such as one failure every 10 years or every 100 years.


Geographic databases hold worlds of information

Microsoft’s SQL Server can store two types of spatial data: the geometry type for planar (flat-earth) data and the geography type for round-earth (geodetic) data. Elements can be built from simple points and lines or from more complex curved sections. The company has also added geographic data formats and indexing to its cloud-based Azure Cosmos DB NoSQL database, intended to simplify geographic analysis of a data set for tasks such as computing store performance by location. Noted for its strong lineage in geographic data processing, ESRI, the creator of ArcGIS, is also expanding to offer cloud services that store geographic information and display it in any of the formats the company pioneered. ESRI, traditionally a big supplier to government agencies, has developed sophisticated tools for rendering geographic data in ways that are useful to fire departments, city planners, health departments and others who want to visualize how a variety of data looks on a map. There is also a rich collection of open source databases devoted to curating geographic information.
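The kind of round-earth computation these spatial types support, such as the distance between two coordinates, can be sketched with the haversine formula. This is a hand-rolled spherical approximation for illustration; engines like SQL Server's geography type compute such distances natively on a more accurate ellipsoidal model:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    earth_radius_km = 6371.0  # mean Earth radius
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    # Haversine formula for the central angle between the two points.
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# Approximate distance from London to Paris (coordinates are approximate).
distance = haversine_km(51.5074, -0.1278, 48.8566, 2.3522)
print(f"{distance:.0f} km")
```

A planar (geometry-type) distance on raw latitude/longitude values would get this noticeably wrong at this scale, which is exactly why the two spatial types exist.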


Internet of Trusted Things: Democratizing IoT

Right now, the Internet of Things is more dolphin than human. Connections are disparate and clunky, and connecting devices does not create automatic value the way connecting people does. Intelligence has to be connected for the joining to add value. But IoT is becoming more intelligent by the day. Edge computing, where Moore's law gives each IoT sensor the computing power to make artificially intelligent decisions without relying on a central cloud hub, creates this intelligence. In the words of Stan Lee, with great power comes great responsibility. So we return to the question: who controls IoT? In a world with 86 billion devices, each equipped with intelligence at the edge, the answer to this question concerns the future of humanity. IoT is notoriously fractured; its countless use cases require domain expertise. As a result, there has been no winner-take-all dynamic analogous to the internet, where network effects anointed masters in search (Google) and social (Facebook). According to Statista, at the end of 2019 there were 620 IoT platforms, including those of tech behemoths Microsoft and Amazon. 



Quote for the day:

"Real leaders are ordinary people with extraordinary determinations." -- John Seaman Garns