Daily Tech Digest - April 14, 2021

How far have we come? The evolution of securing identities

With internal, enterprise-facing identity, these individuals work for your organization and are probably on the payroll. You can make them do things that you can’t ask customers to do. Universal 2nd Factor (U2F) is a great example. You can ship U2F to everyone in your organization because you’re paying them. Plus, you can train internal staff and bring them up to speed with how to use these technologies. We have a lot more control in the internal organization. Consumers are much harder. They are more likely to just jump ship if they don’t like something. Adoption rates of technologies like multifactor authentication are extremely low in consumer land because people don’t know what it is or what its value proposition is. We also see organizations reluctant to push it. A few years ago, a client had a 1 percent adoption rate of two-factor authentication. I asked, “Why don’t you push it harder?” They said that every time they have more people using two-factor authentication, there are more people who get a new phone and don’t migrate their soft token or save the recovery codes. Then, they call them up and say, “I have my username and password but not my one-time password.”


Informatica debuts its intelligent data management cloud

As data becomes more valuable, so does data management, Informatica argues. Its IDMC offers more than 200 intelligent cloud services, powered by Informatica's AI engine CLAIRE. It applies AI to metadata to give an organization an understanding of its "data estate," Ghai explained. The "data estate" tells you about the fragmentation of data -- its location and the various domains of data. "And through that insight," Ghai said, Informatica will "automate the ability to connect to data, to build data pipelines, process data, provision it for analytics... to apply advanced transformations to cleanse that data and trust it... to match, merge and build a single source of truth." From there, the platform aims to make data more accessible to business users with features like the "data marketplace." With the marketplace, users can "shop for data" much as one would shop for consumer goods on the Amazon marketplace, Ghai explained. The IDMC is micro-services based and API-driven, with elastic and serverless processing. It's built for hybrid and multi-cloud environments. The platform is already running at scale, processing more than 17 trillion transactions each month.


Microservices in the Cloud-Native Era

With developer tools and platforms like Docker, Kubernetes, AWS, GitHub, etc., software development has become very approachable and easy. Consider a monolithic architecture with three million lines of code: making changes to the code base whenever required and releasing new features was never an easy task. It created a lot of friction between developer teams, and finding the mistake that was causing the code to break was a monumental task. That’s where microservices architecture shines. Many companies have recently moved from their humongous monolithic architectures to microservices for a brighter future. There are many advantages to shifting to a microservices architecture. While a monolithic application puts all of its functionality into a single code base and scales by replicating on multiple servers, a microservices architecture breaks an application down into several smaller services and segments them by logical domains. Together, microservices communicate with one another over APIs to form what appears as a single application to end users. The problem with a monolithic application is that, when something goes wrong, the operations team blames development, and development blames QA.


Modern Data Warehouse & Reverse ETL

“Reverse ETL” is the process of moving data from a modern data warehouse into third-party systems to make the data operational. Traditionally, data stored in a data warehouse is used for analytical workloads and business intelligence (i.e. identifying long-term trends and influencing long-term strategy), but some companies are now recognizing that this data can be further utilized for operational analytics. Operational analytics helps with day-to-day decisions, with the goal of improving the efficiency and effectiveness of an organization’s operations. In simpler terms, it’s putting a company’s data to work so everyone can make better and smarter decisions about the business. For example, if your MDW ingested customer data which was then cleaned and mastered, that customer data can then be copied into multiple SaaS systems such as Salesforce to make sure there is a consistent view of the customer across all systems. Customer info can also be copied to a customer support system to provide better support to that customer by having more info about that person, or copied to a sales system to give the customer a better sales experience.
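As a sketch of that flow (the warehouse query, record shapes, and SaaS client here are hypothetical stand-ins, not a real Salesforce API), a reverse-ETL job reads mastered customer records from the warehouse and upserts them into each downstream system so all of them share the same view:

```python
# Minimal reverse-ETL sketch (all names hypothetical): read mastered
# customer records from the warehouse, then push them into each SaaS
# system so every system shares one consistent view of the customer.

def query_warehouse():
    """Stand-in for a SQL query against the modern data warehouse."""
    return [
        {"customer_id": 1, "name": "Ada", "email": "ada@example.com"},
        {"customer_id": 2, "name": "Lin", "email": "lin@example.com"},
    ]

class SaaSClient:
    """Stand-in for a CRM or support-system API client."""
    def __init__(self):
        self.records = {}

    def upsert(self, record):
        # Upserting keyed on customer_id keeps the SaaS copy in step
        # with the warehouse's single source of truth.
        self.records[record["customer_id"]] = record

def reverse_etl(warehouse_rows, targets):
    for row in warehouse_rows:
        for target in targets:
            target.upsert(row)

crm, support = SaaSClient(), SaaSClient()
reverse_etl(query_warehouse(), [crm, support])
```

After the run, `crm.records` and `support.records` hold identical copies of the mastered data, which is the "consistent view across all systems" the excerpt describes.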


The Microsoft-Nuance Deal: A new push for voice technology?

Microsoft has had its hand in voice technology since debuting its virtual assistant Cortana in 2015 as part of the initial Windows 10 release. Since then, Cortana has evolved to support Android and iOS devices, Xbox, the Edge browser, Windows Mixed Reality headsets, and third-party devices such as thermostats and smart speakers. According to Microsoft, Cortana is currently used by more than 150 million people. More recently, the company shifted Cortana to position it as more of an office assistant rather than for more general use. “Voice recognition is gaining momentum and will be used in every type of industry — from transcription to command-and-control types of applications — and acquiring a leading vendor in this area just makes sense,” Pleasant said. She stressed that as users become familiar with Cortana, Siri and Amazon's Alexa at home, they expect to see similar speech-enabled technologies at work. She also noted that Microsoft is one of the few companies with the resources to acquire a company like Nuance, allowing it to jump ahead of rivals who might have wanted to do the same thing.


Get your firm to say goodbye to password headaches

In a passwordless environment, no password storage or management is needed. Therefore, IT teams are no longer burdened by setting password policies, detecting leaks, resetting forgotten passwords and having to comply with password storage regulation. It’s fair to say that for many helpdesk teams, password resets are the most common user request. Past research has determined that for some larger organizations, up to $1 million per year can be spent on staffing and infrastructure to handle password resets alone. Resetting passwords is probably not a particularly complex issue for most IT departments to deal with, but it’s the sheer number of requests that makes handling them an extremely time-consuming task. Just how much time does that take away from helpdesks on a daily, weekly or monthly basis? It’s one of those hidden costs that your firm will be incurring that can be streamlined by giving people passwordless connections into their environment. Passwords remain a weakness for those trying to secure customer and corporate data, and they are the number one target of cyber criminals.


Modernising the insurance industry with a shared IT platform model

Pockets of the insurance industry are heading this way by, for example, using vehicle trackers that reward good driving with lower premiums. But behind the scenes for many organisations is a mass of hugely complex products and equally unwieldy legacy systems that don’t provide them with the ability to work in a way that is agile and digital-first. Assess your systems as they stand today and you may find that several, or possibly hundreds, have been redundant for some time. Eliminating these systems, which are nothing more than drains on the business’s resources, will allow for a greater level of agility. By moving away from cumbersome legacy systems that are no longer fit for purpose, insurers can create a simplified system that unifies silos, makes everyday work more efficient, and saves the business money, which it can then reinvest into creating a customer-centric company that can rival its strongest competitors. Imagine a world where you could simplify your product range, providing cover for the highest number of people with the fewest number of insurance products.


5 Great Ways To Achieve Complete Automation With AI and ML

The self-healing technique in test automation solves major issues in test script maintenance, where automation scripts break at every change in an object property, including name, ID, CSS, etc. This is where dynamic location strategy comes into the picture: programs automatically detect these changes and fix them dynamically, without human intervention. This changes the overall approach to test automation to a great extent, as it allows teams to utilize the shift-left approach in agile testing methodology, making the process more efficient with increased productivity and faster delivery. ... This self-healing technique saves a lot of the time developers invest in identifying changes and updating them in the UI. Below is the end-to-end process flow of the self-healing technique as handled by artificial intelligence-based test platforms. As per this process flow, the moment an AI engine figures out that a test may break because an object property has changed, it extracts the entire DOM and studies the properties. Using dynamic location strategy, it then runs the test cases without anyone noticing that any such changes have been made.
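A minimal sketch of what such a dynamic location strategy might look like (the DOM shape, locator fields, and scoring logic below are illustrative assumptions, not any vendor's implementation): try the recorded ID first; if it broke, fall back to the element's other stored properties and "heal" the locator for future runs.

```python
# Hypothetical self-healing locator: the element's ID changed in the DOM,
# so the lookup falls back to matching on other recorded properties and
# then updates ("heals") the stored locator without human intervention.

def find_element(dom, locator):
    # 1. Try the primary locator (the element's last-known ID).
    for el in dom:
        if el.get("id") == locator["id"]:
            return el, locator
    # 2. The ID broke: score candidates by how many other recorded
    #    properties (name, text, CSS, ...) still match.
    def score(el):
        return sum(el.get(k) == v for k, v in locator.items() if k != "id")
    best = max(dom, key=score)
    if score(best) == 0:
        raise LookupError("no candidate shares any property with the locator")
    # 3. Heal: adopt the element's new ID so the next run hits step 1.
    healed = dict(locator, id=best["id"])
    return best, healed

dom = [
    {"id": "btn-login-v2", "text": "Log in", "css": "btn primary"},  # ID changed
    {"id": "btn-cancel", "text": "Cancel", "css": "btn"},
]
locator = {"id": "btn-login", "text": "Log in", "css": "btn primary"}
el, healed = find_element(dom, locator)
```

Real platforms study the entire extracted DOM with learned models rather than a simple match count, but the flow is the same: detect the break, relocate, update, and keep the test green.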


Apache Software Foundation retires slew of Hadoop-related projects

ASF's Vice President for Marketing & Publicity, Sally Khudairi, who responded by email, said "Apache Project activity ebbs and flows throughout its lifetime, depending on community participation." Khudairi added: "We've...had an uptick in reviewing and assessing the activity of several Apache Projects, from within the Project Management Committees (PMCs) to the Board, who vote on retiring the Project to the Attic." Khudairi also said that Hervé Boutemy, ASF's Vice President of the Apache Attic, "has been super-efficient lately with 'spring cleaning' some of the loose ends with the dozen-plus Projects that have been preparing to retire over the past several months." Despite ASF's assertion that this big data clearance sale is simply a spike of otherwise routine project retirements, it's clear that things in big data land have changed. Hadoop has given way to Spark in open source analytics technology dominance, the senseless duplication of projects between Hortonworks and the old Cloudera has been halted, and the Darwinian natural selection process among those projects has completed.


DNS Vulnerabilities Expose Millions of Internet-Connected Devices to Attack

In a new technical report, Forescout and JSOF describe the set of nine vulnerabilities they discovered as giving attackers a way to knock devices offline or to download malware on them in order to steal data and disrupt production systems in operational technology environments. Among the most affected are organizations in the healthcare and government sectors because of the widespread use of devices running the vulnerable DNS implementations in both environments, Forescout and JSOF say. According to the two companies, patches are available for the vulnerabilities in FreeBSD, Nucleus NET, and NetX. Device vendors using the vulnerable stacks should provide updates to customers. But because it may not always be possible to apply patches easily, organizations should consider mitigation measures, such as discovering and inventorying vulnerable systems, segmenting them, monitoring network traffic, and configuring systems to rely on internal DNS servers, they say. The two companies also released tools that other organizations can use to find and fix DNS implementation errors in their own products. 




Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - April 13, 2021

19 Realistic Habits To Improve Software Development

When you finish writing a fragment of code and see that it works, take some time to reread it and see if you can improve it. Imagine you are going to show it to someone else who will evaluate your code. Would you leave it the same? One of the best code refactoring techniques is the red/green process used in Agile test-driven development. To use this technique, your code must be covered with tests. If something fails when refactoring, a test will not pass, and you will be aware that something is wrong with your refactor. ... Plan a time interval without distractions or interruptions. Interruptions will make your mind lose track of what it is developing, and you will have to start again when you resume the activity, which will cost you extra work time and make you more prone to mistakes. One approach that works is to leave open only the IDE and a browser with a maximum of two tabs. ... Don’t try to write clever code that only you understand. Write code that someone else can read and understand. It doesn’t matter if your code has a few more lines if they’re necessary to make it easier to understand. Remember that in a few months, you or someone else on your team may have to modify the code, and if it is not easy to understand, it will not be easy to modify.
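A tiny illustration of that red/green safety net (the function and its tests are invented for the example): the tests pin down behavior first, so a refactor that changes structure but not behavior keeps them green, and any refactor that breaks behavior turns them red immediately.

```python
# Red/green refactoring sketch: the same test suite runs against the
# code before and after the refactor, proving behavior was preserved.

def total_price(items):
    # Original version: explicit accumulator loop.
    total = 0.0
    for price, qty in items:
        total += price * qty
    return total

def total_price_refactored(items):
    # Refactor: same behavior, tighter expression.
    return sum(price * qty for price, qty in items)

def check_total_price(fn):
    assert fn([(2.0, 3), (1.5, 2)]) == 9.0
    assert fn([]) == 0.0

check_total_price(total_price)             # green before the refactor
check_total_price(total_price_refactored)  # still green after it
```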


Clear & Present Danger: Data Hoarding Undermines Better Security

Even though there is overlap between the users of big companies' services and the customers of small businesses, the big companies aren't sharing their data. As a result, customers who use smaller businesses are left to fend for themselves. A few companies are trying to change that. Deduce (disclosure, another company I've consulted for) created a data collective through which companies can share information about users' security-related behavior and logins. In exchange for sharing data with the platform, companies get access to Deduce's repository of identity data from over 150,000 websites. They can use this shared data to better detect suspicious activity and alert their users, just like Microsoft and Google do using their own data. In a different approach to helping businesses identify suspicious users, LexisNexis created unique identifiers for their clients' customers. Using these identifiers, their clients can share trust scores that indicate if a particular user is suspicious. If a suspicious user attempts to log in to a website, the site can block that user to keep themselves and their legitimate users safer.


Optimizing the CIO and CFO Relationship

“CIOs are more likely to be pioneers and/or integrators, while CFOs are more likely to be guardians and drivers,” according to consultancy Deloitte in a description of different corporate personality types. “Pioneers are novelty-seeking, they like having a variety of possibilities, generating new ideas….On the other hand, the guardian personality values structure and loyalty, and is much more methodical, detail-oriented, and perhaps a little more risk-averse.” ... “CFOs understand that they have to change and expand their skills,” said Mastanuono. “The modern CFO understands technology and how it can transform the business. He or she also needs to understand the future of what finance will look like, and be a transformer of people, processes, and systems. The CFO must move from being a reactive to a proactive collaborator so the end business can be positioned to have the right systems and data at the right time. Breaking down silos and developing empathy and cross-functional collaboration are requirements, and the CFO-CIO relationship is a critical piece.” ... If CFOs and CIOs can develop a common approach to IT investments that looks at strategic risks as well as benefits, it creates common ground for project discussions and evaluations.


How to address post-pandemic infrastructure pain points

Managing workforce transformation is already challenging enough for employees who need to access on-premises resources. It becomes even more difficult if these employees work in regulated sectors, as medical and financial organizations need to track their employees’ identities, access requests, and usage to an even greater degree. Moreover, because there’s no one set of global standards, IT teams will need to account for many different compliance frameworks that vary based on where an employee is sitting, what information they’re accessing, and what sector they’re working in. On top of that, as businesses build new infrastructures that can accommodate and monitor permanently remote workers, they must be mindful of how certain regulations affect what personally identifiable information they can record about their own employees. GDPR, CCPA, and other privacy laws predate the pandemic, but like workforce transformation, they’ve become even starker and more commonplace challenges now. Different jurisdictions will have different mandates, and your IT teams will need to account for them all.


12 steps towards a secure project management framework

Cyber security is a tech-heavy domain, and project/program management is essential to delivering successful projects. However, cyber security requires a few tweaks to regular management practices, as it comes with a different set of requirements. A cyber security management program is complex in nature and entails systematic processes. It deals with all aspects of a company’s operations, from mapping and recruiting skilled security professionals to vendor risk management. It involves protecting and securing computer systems, networks, and data from theft or damage, thereby ensuring business continuity. A project manager usually has to oversee many one-time and recurring cyber security tasks while handling usual responsibilities and priorities. A good project management framework will ensure that projects are delivered smoothly, without exceeding budgets, and within the agreed timeframe. For any project management program to be successful, it’s important to define roles and responsibilities, a detailed plan of action, and milestones to be achieved. While most standard project management practices hold true in cyber security programs, a few cyber security-specific aspects need to be taken care of with absolute diligence and strict adherence.


Information Relativity

Relativity was introduced at the beginning of the last century, when Einstein proved that reality is fundamentally different depending on your frame of reference, a distortion of the spacetime continuum. The concept has led to the discovery of black holes, gravitational lenses, time dilation, and all kinds of other fantastic things. Relativity is not at all what one would expect based on our regular day-to-day lives, which operate according to the classic laws of physics. It changes what it means to observe and to be an observer: our experiences of the world can differ, not merely our interpretations of it. There are circumstances where the world I experience is inconsistent with yours. It turns out that communication works in this same peculiar way. Information is distorted depending on the location of the observer. Mark Burgess calls this “information relativity”: messages can take multiple paths and interfere with one another, information can be reversed in its order as it travels along one path, and the speed of communication along one path can differ from the speed along another.


The Role of EiPaaS in Enterprise Architecture: Part 1

When discussing enterprise architecture, a diagram of the IT landscape comes to mind because that is the standard approach to defining an architecture. However, during our work with a number of enterprise architecture teams worldwide, we discovered that enterprise architecture has a larger strategic scope than what typical IT diagrams capture. Fundamentally, enterprise architecture converts business strategy into a value generation outcome by creating a foundation to execute various IT initiatives and processes. It is about gaining a long-term view for the organization, including the integration and standardization of various elements involved in the business. ... At the initial stages, an enterprise architecture will define the systems and subsystems required for each organization’s function. It starts with purchasing core systems, such as human resource management (HRM), customer relationship management (CRM) and/or enterprise resource planning (ERP) based on the business domain of the organization. In addition, subsystems will be built around the core systems by in-house or outsourced development teams. Systems and subsystems that belong to each function operate independently with limited or no information exchange.


Nvidia announces Morpheus, an AI-powered app framework for cybersecurity

Morpheus essentially enables compute nodes in networks to serve as cyberdefense sensors — Nvidia says its newly announced BlueField-3 data processing units can be specifically configured for this purpose. With Morpheus, organizations can analyze packets without information replication, leveraging real-time telemetry and policy enforcement, as well as data processing at the edge. Thanks to AI, Morpheus can ostensibly analyze more security data than conventional cybersecurity app frameworks without sacrificing cost or performance. Developers can create their own Morpheus skills using deep learning models, and Nvidia says “leading” hardware, software, and cybersecurity solutions providers are working to optimize and integrate datacenter security offerings with Morpheus, including Aria Cybersecurity Solutions, Cloudflare, F5, Fortinet, Guardicore, Canonical, Red Hat, and VMware. Morpheus is also optimized to run on a number of Nvidia-certified systems from Atos, Dell, Gigabyte, H3C, HPE, Inspur, Lenovo, QCT, and Supermicro. Businesses are increasingly placing their faith in defensive AI like Morpheus to combat the growing number of cyberthreats.


Automation will accelerate decentralization and digital transformation

As the vaccinated population grows, doors reopen, and more people come together again, the reality we find ourselves in will not be the one left behind in 2019. Many long for a return to in-person experiences, but at the same time, have grown accustomed to the flexibilities of a decentralized, digital-first world. As we emerge from lockdown, hitting "rewind" will not satisfy customer and employee needs. Instead, companies must create hybrid experiences that integrate both digital and in-person modalities. In addition, the growing expectations of stakeholders have created unprecedented demand for IT innovation and a greater sense of urgency in the post-pandemic world. Even as more offline activities resume, 2020's rapid digitalization will have a large and lasting impact on both customer and employee experiences. For example, analysis of global research from Salesforce shows customers anticipate engaging online with companies just as much in 2021 as they did in 2020. That customers expect to maintain this substantial departure from their 2019 patterns suggests that the swing to digital at the height of the pandemic wasn't purely due to unavailability of in-person channels.


How data poisoning attacks corrupt machine learning models

The main problem with data poisoning is that it's not easy to fix. Models are retrained with newly collected data at certain intervals, depending on their intended use and their owner's preference. Since poisoning usually happens over time, and over some number of training cycles, it can be hard to tell when prediction accuracy starts to shift. Reverting the poisoning effects would require a time-consuming historical analysis of inputs for the affected class to identify all the bad data samples and remove them. Then a version of the model from before the attack started would need to be retrained. When dealing with large quantities of data and a large number of attacks, however, retraining in such a way is simply not feasible and the models never get fixed, according to F-Secure's Patel. "There's this whole notion in academia right now that I think is really cool and not yet practical, but we'll get there, that's called machine unlearning," Hyrum Anderson, principal architect for Trustworthy Machine Learning at Microsoft, tells CSO. "For GPT-3 [a language prediction model developed by OpenAI], the cost was $16 million or something to train the model once."
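A toy illustration of why gradual poisoning is hard to spot (the nearest-centroid classifier and all the numbers are invented for this sketch, not from the article): a few mislabeled samples injected over successive training cycles drag one class centroid until a previously well-classified point flips class.

```python
# Label-flip poisoning against a 1-D nearest-centroid classifier.

def centroid(points):
    return sum(points) / len(points)

def predict(x, c0, c1):
    # Assign x to whichever class centroid is nearer.
    return 0 if abs(x - c0) < abs(x - c1) else 1

clean = {0: [1.0, 2.0, 3.0], 1: [8.0, 9.0, 10.0]}
c0, c1 = centroid(clean[0]), centroid(clean[1])   # 2.0 and 9.0
assert predict(8.0, c0, c1) == 1  # clean model: 8.0 is clearly class 1

# Over several training cycles an attacker slips in class-1-valued
# samples labeled as class 0; each retrain drags the class-0 centroid.
poisoned = {0: clean[0] + [9.0] * 10, 1: clean[1]}
p0, p1 = centroid(poisoned[0]), centroid(poisoned[1])  # p0 is now ~7.38
print(predict(8.0, p0, p1))  # prints 0: the clean point is misclassified
```

Because each individual injection moves the centroid only slightly, no single retraining cycle shows an obvious accuracy drop, which is exactly the detection problem the excerpt describes.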



Quote for the day:

"It's not about how smart you are--it's about capturing minds." -- Richie Norton

Daily Tech Digest - April 12, 2021

Coding interviews are terrible. Can we make them better?

A typical coding interview will involve presenting a candidate with a technical problem, which they'll have to solve in real time and in front of the interviewing panel. While these typically vary from one company to another, one common format is whiteboard coding, whereby a candidate might be asked to provide a solution to a problem involving a binary tree. It was a binary tree task that drew the ire of Howell in his now-famous tweet. These are a fairly typical part of technical interviews, designed to assess a candidate's ability to solve a programming problem and show their thinking 'out loud'. Still, most programmers say this isn't representative of anything they'd have to do in their day-to-day job, and say it's an outdated means of assessing candidates that doesn't reflect their skill level. "These little challenges don't show the greater skill sets, which for me are the ability to construct large programs," says Howell. "It's not about small algorithms. It's about the design of larger systems, and that's way more important." Howell also sees traditional coding interviews as being reflective of an industry that focuses too much on building at speed. "It's partly because the software industry moves so fast," he says.
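For reference, the task behind Howell's tweet, inverting (mirroring) a binary tree, fits in a few lines, which is part of why critics call it unrepresentative of designing large systems:

```python
# Classic whiteboard task: invert a binary tree by swapping every
# node's left and right children, recursively.

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

tree = Node(1, Node(2), Node(3))
inverted = invert(tree)
print(inverted.left.val, inverted.right.val)  # prints: 3 2
```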


How Augmented Reality Strengthens Biotech Manufacturing

Factories where engineers or scientists use smart glasses to obtain virtual guidance, operators work with remote vendors to detect equipment failures in real time, or interactive training sessions are planned by directors located on another continent are already here. “The barriers to adoption are decreasing as the AR industry becomes more robust,” notes Stracquatanio. Probably the biggest advantage of AR is that it enables seeing the production process virtually, without the need to be there. “It’s a game-changer for the industry. Individuals can have eyes and ears on site at a moment’s notice to address an emerging issue, or to host routine remote collaboration sessions,” Stracquatanio highlights. AR can also increase control over the manufacturing process. Pharma and biotech companies cannot afford mistakes during the production phase. A small oversight might lead to serious consequences, such as having to start from scratch, which can be very expensive and time-consuming. A recent example is that of Johnson & Johnson’s manufacturing partner Emergent BioSolutions, whose workers erroneously mixed ingredients from two different Covid-19 vaccines; this led to wasting around 15 million vaccine doses.


Fileless Malware, Endpoint Attacks on the Rise

Cybercriminals are increasingly leveraging fileless malware, cryptominers and encrypted attacks, targeting users both at remote locations as well as corporate assets behind the traditional network perimeter. These were among the findings of WatchGuard Technologies’ Internet Security Report for Q4 2020, which found fileless malware and cryptominer attack rates grew by nearly 900% and 25%, respectively, while unique ransomware payloads plummeted by 48% in 2020 compared to 2019. The report also found botnet malware targeting IoT devices and routers became a top strain, among them the Linux.Generic virus (also known as “The Moon”), malware which is part of a network of servers that directly targets IoT devices and consumer-grade network devices, like routers, to exploit any open vulnerabilities. Total network attack detections grew by 5% in Q4, reaching their highest level in more than two years, while total unique network attack signatures showed steady growth as well, with a 4% increase compared with the third quarter of 2020. “We believe the increase in endpoint attacks between 2019 and 2020 is largely due to the widespread rise of remote work in response to the global pandemic,” Corey Nachreiner, WatchGuard CTO, explained.


Could social media networks pave the way towards stronger authentication?

Passwords are still the most common form of user authentication, “protecting” accounts, devices and systems, but alone, they don’t provide strong security. Not only that, they don’t offer the best user experience. Many passwords don’t even meet the minimum criteria of being unique and complex. People reuse passwords across accounts because they simply can’t keep track of all the logins they have. They choose passwords that are easy to remember to ease the burden, but that makes them easy to guess too. In fact, our research shows that people reuse their passwords across an average of ten personal accounts, while ‘123456’ still topped the list for the most common password in 2020. Even when they have chosen well, their unique and complex password can still fall victim to a modern phishing attack. After all, even an exemplary password can’t protect an account if the holder has been tricked into providing the information. From a user experience perspective, you have the stress and strain of choosing a unique, complex password each time that also meets the criteria demanded by the platform or service provider.


Nation-state cyber attacks double in three years

“Cyber crime economies are shaping the character of nation-state conflicts,” said McGuire. “There is also a ‘second generation’ of cyber weaponry in development that draws upon enhanced capabilities in computing power, AI [artificial intelligence] and cyber/physical integrations. One such example is ‘Boomerang’ malware, which is ‘captured’ malware that can be turned inward to operate against its owners. “Nation states are also developing weaponised chatbots to deliver more persuasive phishing messages, react to new events and send messages via social media sites. In the future, we can also expect to see the use of deepfakes on the digital battlefield, drone swarms capable of disrupting communications or engaging in surveillance, and quantum computing devices with the ability to break almost any encrypted system.” To ease rising tensions and prevent nation states from being drawn into more hostile cyber attacks, 70% of the expert panel said they thought some kind of international treaty would ultimately be necessary – this is by no means a new idea – but just 15% of them thought a cyber convention would be agreed on this decade, 37% said it was more likely to come in the 2030s, and 30% said it would probably never happen.


Quantum computer based on shuttling ions is built by Honeywell

Trapped-ion qubits were used to implement the first quantum logic gates in 1995, and the proposal for a quantum charged coupled device (QCCD) – a type of quantum computer with actions controlled by shuffling the ions around – was first made in 2002 by researchers led by David Wineland of the US National Institute of Standards and Technology, who went on to win the 2012 Nobel Prize for Physics for his work. Quantum gates have subsequently been demonstrated in multiple platforms, from Rydberg atoms to defects in diamond. The quantum computing technology first used by IT giants, however, was solid state qubits. In these, the qubits are superconducting circuits, which can be mounted directly on to a chip. These rapidly surpassed the benchmarks set by trapped ions, and are used in record-breaking machines from IBM and Google: “Working with trapped ions, I would be asked by people, ‘Why aren’t you working with superconducting qubits? Isn’t that race pretty much already settled?’,” says Winfried Hensinger of the UK’s University of Sussex. Recently, however, the progress made using superconducting circuits appears to be slowing as quantum computers integrate more and more qubits.


How MPC can solve blockchain’s data privacy conundrum

MPC, or multi-party computation, solves for confidentiality by utilizing a network of computation nodes that compute directly on encrypted data while maintaining zero knowledge about the data. For example, an employer may want to find out the average age of their employees. For privacy reasons, those employees may not be willing to disclose their ages, but through secret sharing they can each contribute their age without any individual age being identifiable. The possibilities this technology enables are endless; one need only think of the benefits it could bring to industries such as banking and insurance. While MPC solves for privacy, blockchain itself can protect individual data against data breaches via the decentralization of sensitive information. Alone, however, blockchain lacks the infrastructure required to ensure data remains private. ... Not only is the pairing of MPC technology and blockchain a better solution for safeguarding consumer data than those currently in existence, it is one of the most viable solutions for effectively dealing with the monumental problem of data security.
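The average-age example can be sketched with additive secret sharing, the simplest MPC building block. This is a toy, single-process illustration, not a production MPC protocol; the three-node setup and field modulus are invented for the example:

```python
import random

PRIME = 2**61 - 1  # field modulus; all share arithmetic is done mod PRIME

def share(secret, num_parties):
    """Split `secret` into additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(num_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def average_age(ages, num_nodes=3):
    """Each employee shares their age across the nodes; a node only ever
    sees uniformly random shares, never an individual age."""
    node_inputs = [[] for _ in range(num_nodes)]
    for age in ages:
        for node, s in zip(node_inputs, share(age, num_nodes)):
            node.append(s)
    # each node sums its shares locally; only these partial sums are revealed
    partial_sums = [sum(col) % PRIME for col in node_inputs]
    total = sum(partial_sums) % PRIME
    return total / len(ages)

print(average_age([34, 29, 41, 52]))  # 39.0
```

Any single node holds only uniformly random values, yet the sum of the nodes' partial results recovers the aggregate exactly.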


How Do Large Firms Train ML Models At Scale?

GPipe is a distributed machine learning library that uses synchronous stochastic gradient descent together with pipeline parallelism to train any DNN consisting of multiple sequential layers. GPipe partitions a model across various accelerators and splits each mini-batch of training examples into even smaller micro-batches. Hence, GPipe’s accelerators can operate in parallel, maximising the scalability of the training process. It allows easy deployment of additional accelerators to train large models and further scale performance without tuning hyperparameters.
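The micro-batching idea can be illustrated with a toy sketch (this is not GPipe's actual API; the stage functions stand in for model partitions placed on separate accelerators):

```python
def split_microbatches(batch, num_micro):
    """Split one mini-batch into num_micro micro-batches, GPipe-style."""
    size = (len(batch) + num_micro - 1) // num_micro
    return [batch[i:i + size] for i in range(0, len(batch), size)]

def pipeline_run(stages, batch, num_micro=4):
    """Push micro-batches through the sequential stages. In a real pipeline,
    stage k works on micro-batch i while stage k+1 works on micro-batch i-1,
    so the accelerators overlap; here the schedule is simulated serially."""
    outputs = []
    for micro in split_microbatches(batch, num_micro):
        x = micro
        for stage in stages:
            x = [stage(v) for v in x]  # stage = one model partition
        outputs.extend(x)
    return outputs

stages = [lambda v: v + 1, lambda v: v * 2]  # two toy "partitions"
print(pipeline_run(stages, list(range(8))))  # [2, 4, 6, 8, 10, 12, 14, 16]
```

Smaller micro-batches keep each partition busy more of the time, which is where the pipeline's scalability comes from.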


Data validates future of work looks quite different than pre-pandemic

Both private and professional lives are slowly readopting former practices, such as eating inside a restaurant. As we cautiously return to normal, road warriors are ready to get back on the road, but we're also excited to keep some of the improved healthcare, restaurant and retail experiences we've discovered over the last year. Respondents cited the top four things they missed while working remotely: spontaneous interactions with colleagues they wouldn't have talked to otherwise; simply being around other people; exposure to a diversity of perspectives and ideas; and productivity. Qualtrics found that 51% of respondents reported improved productivity during the pandemic lockdown, and those who said their well-being improved were twice as numerous as those who said it declined. Managers concur: 55% said their direct reports have been more productive. Generationally, 54% of millennials said they're more productive, as did 53% of Gen Z, 48% of Gen X and 34% of boomers. Productivity has improved due to flexible schedules (31%), no commute (26%), more control over workspace (24%), the ability to focus with fewer work interruptions (24%) and more privacy and personal space (23%).


The benefits of cyber threat intelligence

All of this saves time and helps them be more effective at mitigating threats and reducing risks. CTI allows the SOC to see beyond the perimeter, so analysts are aware of threats before they hit the infrastructure. That gives the SOC time to prepare and tweak defenses, such as deploying specific monitoring rules or knowing what to be on the lookout for. And when dealing with incidents or alerts, this additional context lets them place an individual alert, or a set of alerts, in the wider context of who is behind it, what the attackers’ aims are, what the typical next steps would be, or even what must have happened already for this to occur. All of that makes it easier to determine how to respond. And when dealing with multiple alerts or incidents, as SOCs do, this context allows you to prioritize, separating the wheat from the chaff as it were. That is critical, as many SOCs are resource-strained, and knowing which items to focus on helps make the most effective use of limited resources.



Quote for the day:

"It's good to trust others but, not to do so is much better." -- Benito Mussolini

Daily Tech Digest - April 11, 2021

One-stop machine learning platform turns health care data into insights

To turn reams of data into useful predictions, Cardea walks users through a pipeline, with choices and safeguards at each step. They are first greeted by a data assembler, which ingests the information they provide. Cardea is built to work with Fast Healthcare Interoperability Resources (FHIR), the current industry standard for electronic health care records. Hospitals vary in exactly how they use FHIR, so Cardea has been built to "adapt to different conditions and different datasets seamlessly," says Veeramachaneni. If there are discrepancies within the data, Cardea's data auditor points them out, so that they can be fixed or dismissed. Next, Cardea asks the user what they want to find out. Perhaps they would like to estimate how long a patient might stay in the hospital. Even seemingly small questions like this one are crucial when it comes to day-to-day hospital operations — especially now, as health care facilities manage their resources during the Covid-19 pandemic, says Alnegheimish. Users can choose between different models, and the software system then uses the dataset and models to learn patterns from previous patients, and to predict what could happen in this case, helping stakeholders plan ahead.


8 Ways Digital Banking Will Evolve Over the Next 5 Years

The initial shift toward digital financial services saw an ad hoc response from regulators. As new technologies come into play and tech giants like Google and Apple become increasingly disruptive in the financial industry, these transformations will force policymakers to identify emerging threat vectors and comprehensively address risk. In contrast to today’s mostly national systems of oversight, a global approach may be necessary to ensure stability in the sector, and we may see the rise of new licensing and supervisory bodies. The future of digital banking appears bright, but the unprecedented pace of innovation and shifts in consumer expectations demand a new level of agility and forward-thinking. Even as financial institutions attempt to differentiate themselves from competitors, co-innovation will become an integral part of success. People and technology will both play critical roles in these developments. Tech capabilities and digital services must be extremely resilient, constantly available at the time of customer need. Human capital, however, will be as crucial as any other asset. Leaders will have to know how to upskill, reskill and retain their talent to promote innovation. 


A new era of innovation: Moore’s Law is not dead and AI is ready to explode

We sometimes use artificial intelligence and machine intelligence interchangeably. This notion comes from our collaborations with author David Moschella. Interestingly, in his book “Seeing Digital,” Moschella says “there’s nothing artificial” about this: There’s nothing artificial about machine intelligence just like there’s nothing artificial about the strength of a tractor. It’s a nuance, but precise language can often bring clarity. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get “smarter” – make better models, for example, that can lead to augmented intelligence and better decisions by humans, or machines. These models improve as they get more data and iterate over time. Deep learning is a more advanced type of machine learning that uses more complex math. The right side of the chart above shows the two broad elements of AI. The point we want to make here is that much of the activity in AI today is focused on building and training models. And this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years.


Rethinking Ecommerce as Commerce at Home

Ecommerce is all grown up. It’s time to break away from the early-internet paradigm where online shopping was a new, “electronic” form of shopping. Today, almost all commerce involves varying degrees of digital elements (discovery, price comparison, personalization, selection, ordering, payment, delivery, etc.). The defining factor is not whether commerce is digital; rather, it is the optimal location for a retailer to meet a consumer’s needs. Shopping happens on a spectrum between home and the store. As such, ecommerce is better understood as commerce at home, and Amazon was the early winner. Great retailers focus on convenience or the experiential. In the new paradigm, certain retail truths persist. For example, all great retailers have focused primarily on either convenience retail or experiential retail. To be clear, any retail can be a great experience, but the priority matters. Amazon focuses ruthlessly on convenience. The outcome is a great customer experience. To drive growth, Amazon has prioritized speed and selection over consultation and curation. Amazon’s focus on convenience has yielded an (incredibly) high-volume, low-margin retail business.


These are the AI risks we should be focusing on

AI may never reach the nightmare sci-fi scenarios of Skynet or the Terminator, but that doesn’t mean we can shy away from facing the real social risks today’s AI poses. By working with stakeholder groups, researchers and industry leaders can establish procedures for identifying and mitigating potential risks without overly hampering innovation. After all, AI itself is neither inherently good nor bad. There are many real potential benefits that it can unlock for society — we just need to be thoughtful and responsible in how we develop and deploy it. For example, we should strive for greater diversity within the data science and AI professions, including taking steps to consult with domain experts from relevant fields like social science and economics when developing certain technologies. The potential risks of AI extend beyond the purely technical; so too must the efforts to mitigate those risks. We must also collaborate to establish norms and shared practices around AI like GPT-3 and deepfake models, such as standardized impact assessments or external review periods.


India Inc. must consider Digital Ethics framework for responsible digitalisation

An accelerated pace of digital transition, consumption of goods and services via app-based interface, and proliferation of data bring numerous risks such as biased decision-making processes being transferred to machines or algorithms at the development stage by humans, a Deloitte statement said on Friday. "These biases can be a threat to the reputation and trust towards stakeholders, as well as cause operational risks," it said. Partner, Deloitte India, Vishal Jain, said the pandemic compelled businesses and consumers to embrace digital technologies like artificial intelligence, big data, cloud, IoT and more in a big way. "However, the need of the hour is to relook at the business operations layered on digital touchpoints with the lens of ethics, given biases might arise in the due course, owing to a faster response time to an issue," he said. Societal pressure to do "the right thing" now needs a careful consideration of the trade-offs involved in the responsible usage of technology, Jain said, adding, its interplay becomes vital to managing data privacy rights while actively adopting customer analytics for personalised service.


How to Be a Better Leader By Building a Better Tribe

All of our journeys are exquisitely different, yet come with a unique set of challenges that can blur our leadership lens if not properly focused. This can become a snowball of personal detriment. Therefore, your mental, physical, and emotional health is just as important (if not more) than your professional and economic health—they are interrelated. Identify a therapist, wellness clinician, spiritual leader, life coach, physical trainer and/or anyone who can support your becoming an even greater version of yourself. Let's call this person the "healer". Make time for physical activity, healthy food choices and spending time with loved ones. Ensure the same investment you make in your team members, you also make in yourself. It is up to you to create your rituals for personal success. What will they entail? ... Similarly to curating a list of your tribal elders, remember that you are also an elder to a younger leader in your collective. We all were afforded a different set of societal privileges based on constructs of race/ethnicity, gender, sexual orientation, cognitive and physical abilities, etc. I think it’s important to utilize some of these privileges to be an ally/co-conspirator to someone who may not have the same position in society.


What is an enterprise architect? Everything you need to know about the role

The role of EA is closely connected to solutions architect, but tends to be broader in outlook. While EAs focus on the enterprise-level design of the entire IT environment, solution architects find spot solutions to specific business problems. EAs also work closely with business analysts, who analyse organisational processes, think about how technology might help, and then make sure tech requirements are implemented successfully. Looking upwards, EAs tend to work very closely with chief information officers (CIOs). While the CIO focuses on understanding the wider business strategy, the EA works to ensure that the technology that the organisation buys will help it to meet its business goals, whether that's improvements in productivity, gains in operational efficiency or developing fresh customer experiences, while also working with others – like the security team – to ensure everything remains secure. Nationwide CIO Gary Delooze is a former EA who says a really good enterprise architect will bring the business and IT teams together to create a technology roadmap.


How Blockchain Can Simplify Partnerships

To appreciate the ways in which blockchains can support complex collaborations, consider the task of shipping perishable goods across borders — a feat that requires effective coordination among suppliers, buyers, carriers, customs, and inspectors, among others. When the parties pass the cargo to one another, a flood of information is transferred with it. Each party keeps its own records and tends to communicate with one partner at a time, which often leads to inconsistent knowledge across participants, shipping delays, and even counterfeit documentation or products. If, say, the buyer expects the goods to be constantly cooled throughout the shipping process and temperatures exceed agreed thresholds, a dispute is likely to occur among the buyer, the supplier, and the carrier, which can devolve into lengthy wrangling. The carrier may haggle over liability to lower the compensation, arguing that customs delaying the transportation, or inspectors who handled the cargo improperly, are the ones to blame. The buyer will ask the supplier for a remedy, who in turn needs to negotiate with the carrier. And so on. Problems like these can manifest in any collaboration that requires cumbersome information sharing among partners and may involve disputes in the process.


Practical Points from the DGPO: An Introduction to Information Risk Management

Individuals are starting to pay attention to organizational vulnerabilities that compound the risks of managing, protecting, and enabling access to information: poor data quality, insufficient protection against data breaches, an inability to auditably demonstrate compliance with numerous laws and regulations, and customer concerns about the ethical and responsible corporate use of personal data. And as organizations expand their data management footprints across increasingly complex hybrid multicloud environments, there has never been a greater need for systemic information risk management. ... In general, “risk” affects the way a business operates in a number of ways. At the most fundamental level, it inhibits quality excellence. But exposure to risk not only affects project objectives; it also poses threats of quantifiable damage, injury, loss, liability, or other negative occurrences that may be avoided through preemptive action. Using the Wikipedia definition as a start, we can define information risk as “the potential for loss of value due to issues associated with managing information.”



Quote for the day:

"The actions of a responsible executive are contagious." -- Joe D. Batton

Daily Tech Digest - April 10, 2021

15 Cybersecurity Pitfalls and Fixes for SMBs

We have obviously, the nation-state actor, which for a typical SMB would be kinda hard to protect against. Especially now as some evidence suggests that there were more than a thousand developers that contributed to the SolarWinds attack and so forth. And, I think that that might be something that’s not in the context of a typical SMB IT admin. But then you also have groups that are teenagers that are hacking around from Mom’s basement, right? You have those guys. You have legitimate criminal enterprises that are in a for-profit, that have balance sheets that have accountants that are actually doing things for profit and for their own revenue. And so when you look at the tools that are available to these organizations, if you look at the black market, and if you look at some of the things that are happening on the internet, you can actually buy toolkits for exploitation. You can buy toolkits that will allow some of these attacks to happen. And from the perspective of a malicious actor, the idea is not necessarily to target a specific business and to get their data. It’s kinda like fishing. You know, the larger net you cast, the more fish you’re going to catch. 


8 Security & Privacy Apps to Share With Family and Friends

Fifteen percent of consumers have left at least one online purchase process because of perceived security issues in the retail website, one report found last holiday season. Fourteen percent declined to purchase an item because of fears over how their data would be handled. And adoption of good security habits is on the uptick: Duo Labs' "2020 State of the Auth" report found more than half (53%) of respondents had used two-factor authentication (2FA), an increase from 28% two years prior. While most (71.5%) had experienced 2FA via SMS, more than one-third (36%) had used an authenticator app. We live in a time when most people spend hours a day on their mobile devices to do their jobs, keep in touch with friends and family, schedule appointments, handle their finances, and complete myriad other tasks. As smartphones handle more of our data, the need to secure them grows. There are several kinds of mobile apps to boost personal security and privacy, from password managers, to secure messaging apps, to anti-theft apps, and more. As a security pro, you may have your device locked down — but your family and friends may not know which steps they should be taking.


Threat matrix for storage services

Within cloud storage services, we witness users sharing various file types, such as Microsoft Office and Adobe files, and attackers taking advantage of this to deliver malware through email. Moreover, use cases of cloud storage go beyond internal interfaces, with business logic being shared with third parties. Therefore, the Azure Defender for Storage security team has mapped the attack surface exposed by the Storage service. This post reflects our findings based on the MITRE ATT&CK® framework, which is a knowledge base for tactics and techniques employed in cyberattacks. MITRE matrices have become an industry standard and are embraced by organizations aiming to understand potential attack vectors in their environments and to ensure they have adequate detections and mitigations in place. While analyzing the security landscape of storage, and applying the same methodology we defined for Kubernetes, we noticed both resemblances and differences across techniques. While Kubernetes runs atop an underlying operating system, its threat matrix is structured like the MITRE matrices for Linux or Windows.


Visa Describes New Skimming Attack Tactics

Visa's Payment Fraud Disruption team reports that cybercriminals are increasingly using web shells to establish command and control over retailers' servers during payment card skimming attacks. "As a result, eSkimming, or digital skimming, is among the top threats to the payments ecosystem," according to the Visa report. The web shells enable fraudsters conducting digital skimming attacks on e-commerce sites to establish and maintain access to compromised servers, deploy additional malicious files and payloads, facilitate lateral movement within a victim's network and remotely execute commands, Visa says. The most common methods for deploying a web shell are malicious application plug-ins and PHP code, Visa reports. Visa reached its conclusions after studying 45 digital skimming attacks in 2020. In February, Microsoft reported spotting 140,000 web shells per month on servers from August 2020 to January 2021, which it said is almost twice the number from the same period the year before. These web shells, however, were not being used for retail attacks. Visa notes attacks skimming payment card data from online checkout functions of e-commerce sites have become more prevalent during the COVID-19 pandemic as consumers have shifted to online shopping.


Dodge Adversarial AI Attacks Before It's Too Late!

In a tech-oriented world where hackers and technological advancements emerge in parallel, artificial intelligence has made big strides recently in understanding language. Even so, AI can still suffer from potentially dangerous and alarming algorithmic blind spots. Research shows how AI algorithms that parse and analyze text can be tricked and deceived by precisely crafted phrases: a sentence that seems perfectly ordinary to you may have the strange ability to dodge the AI algorithm. The expert community estimates that by the year 2040, artificial intelligence will reach the capability to perform all the intellectual functions of human beings. This might seem frightening, but with the few techniques outlined here you will radically improve your chances of withstanding adversarial AI. Deceiving facial recognition and tricking speech recognition is child’s play for hackers and emerging cybercriminals; adversarial attacks, meanwhile, invite deeper and more conceptual speculation.


Digital transformation: 5 trends that could shift your strategy

Application development, modernization, and integration are central practices in digital transformations that help organizations launch new business capabilities, improve customer experiences, and drive business process efficiencies. Until recently, CIOs and IT leaders considered implementations as a build-vs.-buy decision or used an RPA platform to automate workflows. Many invested in maturing agile and DevOps to continuously deliver cloud-native microservices and applications when building applications. Then COVID hit, and more IT leaders pursued low-code and no-code platforms to accelerate application development. Having multiple approaches to develop and support application development and integration is beneficial, but today, a growing number of options provide a complete hyperautomation platform. Hyperautomation app dev platforms have a mix of low-code, no-code, automation, and machine learning capabilities, provide out-of-the-box DevOps capabilities, and align the dev lifecycle to agile processes. Collectively, they can accelerate the development process and improve the productivity and quality of development efforts.


Using a schema registry to ensure data consistency between microservices

If Microservice A is holding data in a structure that is incongruent with Microservice B’s schema, some mapping will need to be done. There’s no magic. But, at the least, the developer writing the data exchange code will be aware of the conditions to satisfy, because Microservice B’s data schema is well known. It’s not a question of reverse engineering some code in play and then having to figure out the mapping. Having the reliability provided by a single source of truth is a definite time-saver. Another area where a schema registry provides significant value is validation. In the world of data management, there are few experiences more disappointing than writing a bunch of data validation code based on a given example, only to have the code become worthless because the underlying data schema you used was changed by a Data Architect somewhere upstream in the development process. Using a schema registry minimizes the problem; in some cases, it makes the issue go away altogether. The way it works is that when it comes time to validate some data, the developer gets the schema associated with the submitted data from the domain’s schema registry.
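That fetch-then-validate flow can be sketched with a toy in-memory registry. The domain name, schema shape, and helper functions here are invented for illustration; real registries (Confluent Schema Registry, for instance) expose schemas over HTTP, but the validation pattern is the same:

```python
# Hypothetical in-memory schema registry, keyed by (domain, version).
REGISTRY = {
    ("orders", 1): {
        "order_id": int,
        "amount": float,
        "currency": str,
    }
}

def get_schema(domain, version):
    """Fetch the registered schema -- the single source of truth."""
    return REGISTRY[(domain, version)]

def validate(record, domain, version=1):
    """Check a record against the registered schema before exchanging it."""
    schema = get_schema(domain, version)
    missing = [k for k in schema if k not in record]
    wrong = [k for k, t in schema.items()
             if k in record and not isinstance(record[k], t)]
    return not missing and not wrong

print(validate({"order_id": 7, "amount": 19.99, "currency": "USD"}, "orders"))  # True
print(validate({"order_id": "7", "amount": 19.99}, "orders"))                   # False
```

Because both microservices validate against the same registered schema, an upstream schema change surfaces as a validation failure rather than silently breaking hand-written mapping code.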


US Blacklists 7 Chinese Supercomputer Entities

Citing national security concerns, the U.S. Department of Commerce this week placed seven Chinese supercomputer organizations on the Entity List, which effectively bars them from receiving supplies or components from American companies. Commerce Secretary Gina M. Raimondo notes that the high-performance computing technologies developed by these entities could be used in weapons of mass destruction programs. "Supercomputing capabilities are vital for the development of many - perhaps almost all - modern weapons and national security systems, such as nuclear weapons and hypersonic weapons," Raimondo says. "The Department of Commerce will use the full extent of its authorities to prevent China from leveraging U.S. technologies to support these destabilizing military modernization efforts." Now that these organizations have been placed on the Entity List, the Commerce Department will require them to apply for a special license to do business with U.S. companies or receive supplies or components from American firms. The department's Bureau of Industry and Security must review and then approve or deny all license applications for organizations on the Entity List. 


Crossing the Line: When Cyberattacks Become Acts of War

The Cold War concept isn’t outdated. In the decades since the fall of the Soviet Union, the battleground has simply shifted from conflicts between ideological proxy governments to cyberspace. And the opponents have grown from a few primary nations into a broad range of sovereign threat actors. The question is, when does a cyberattack cross the line between a criminal action or mere prank, to an act of war? Is it the nature of the victim? The nature of the attacker? The nature of the damage? Or a combination of them all? To be sure, this is not a determination for cybersecurity professionals to make. Our role is to defend IT assets for our organizations by reducing risk, mitigating threats, remediating the situation after an attack, and generally trying to keep everything running safely and smoothly. It doesn’t matter whether we are facing a script kiddie trying to deface a website, a political hacktivist trying to make a statement, a cybercriminal trying to steal or ransom our data, or a state actor trying to steal confidential information. Our goal is to keep them out, and minimize the damage when they do manage to get in. The only thing that changes is how well-resourced and tenacious our opponents are.


4 Body Language Mistakes and How to Fix Them

When engaged in a difficult conversation, without empathizing with how the other person might be feeling in the moment, we may appear cold, unemotional, and downright rude. By adopting kindness and warmth in our body language, we can convey the right message without necessarily making them feel bad. When someone is passed up for a promotion, showing an attitude of indifference, without understanding the value the promotion holds in their life, can make them resent you. Body language that shows presence and concern, by giving them an opportunity to express their feelings, can build better relationships. When a co-worker is grieving a personal loss, you may appear too intrusive in your body language when all they need is space to let the feelings subside. It could be a personal preference or a cultural nuance, but without understanding their context you may actually do more harm than good. When dealing with difficult people, your body language may switch to a fight-or-flight response. But if you take a moment to analyze the situation without falling prey to the fundamental attribution error, you may understand the rationale behind their behavior.



Quote for the day:

“Prove your integrity day-by-day, by keeping promises.” -- S. Chris Edmonds

Daily Tech Digest - April 09, 2021

Seeing, doing, and imagining

Association, which Pearl, a Turing Award winner, identifies as the first of three steps on his ladder of causation, won’t help executives answer many of the questions they need to ask when formulating corporate strategy, making investment decisions, or setting prices. To answer questions such as, “What will raising prices by 10 percent do to revenues?” you need to start climbing Pearl’s ladder. Intervention is the second step on the ladder. “Intervention ranks higher than association because it involves not just seeing but changing what is,” Pearl writes. That’s why companies are running scads of randomized controlled experiments these days. They are changing things on a small scale to figure out what effects an action will produce on a large scale. Real-world experiments aren’t a necessity — you can get a machine to figure out the effects of an intervention without actually changing anything in the real world. ... The third and highest rung on Pearl’s causation ladder is counterfactuals. Pursuing causation at this level means determining what would have happened if your company had done something in the past. For instance, what would revenues be today if you had cut prices by 10 percent a year ago?


The time is right for passwordless authentication

People just can’t be trusted to set reliable passwords, to change them frequently, to make sure they are strong, and to keep them secure. Forcing password change simply creates bad feeling and password reuse. Two-factor authentication is little better as a solution. It still relies on a password, often with a second PIN disclosed to a mobile phone. I’ve heard that some businesses and schools are trying to implement two-factor solutions, but users do not feel comfortable disclosing a private mobile number as a means to authenticate and log on, so the business needs to provide a second phone to the user, which is expensive and gives the user the task of carrying two phones around. Asking people to do more to achieve a goal than they were doing before is a sure-fire way to disgruntle them. Passwordless authentication removes all of these problems. It gives end-users less to remember, and less to think about. Login is faster, easier, and in comparison to tapping in passwords, waiting for a text to come through and tapping in a PIN, it is seamless and painless.


AI can stem the tide of increasing fraud and money laundering

Rather than having developers rewrite systems each time legislation changes, the new breed of AI-enabled RegTech can ‘learn’, interpret and comply with applicable laws, including KYC and AML. No system will ever be perfect – there is still the need for human oversight and there is still the possibility for criminals to find loopholes. These criminals are increasingly using technology to exploit weak links in regulatory frameworks, but as fast as they can move to deploy new schemes, machine learning systems will be able to counter them. AI-based technology has moved beyond an experimental phase and is ready to become a competitive differentiator in financial services, but there is still a level of reticence on the part of the industry when it comes to what many perceive as handing over compliance to machines. Traditionally, banks and other companies that handle monetary transactions have had to be conservative in nature. Data tends to be housed in silos, often on legacy systems, rather than being visible across the whole organisation, and it is that organisation-wide visibility that allows AI-based systems to deliver the greatest value.


Root Cause Analysis for Data Engineers

In theory, root causing sounds as easy as running a few SQL queries to segment the data, but in practice, this process can be quite challenging. Incidents can manifest in non-obvious ways across an entire pipeline and impact multiple tables, sometimes hundreds. For instance, one common cause of data downtime is freshness — i.e. when data is unusually out-of-date. Such an incident can be the result of any number of causes, including a job stuck in a queue, a time out, a partner that did not deliver its dataset on time, an error, or an accidental scheduling change that removed jobs from your DAG. In my experience, I’ve found that most data problems can be attributed to one or more of these events: an unexpected change in the data feeding into the job, pipeline or system; a change in the logic (ETL, SQL, Spark jobs, etc.) transforming the data; an operational issue, such as runtime errors, permission issues, infrastructure failures, or schedule changes. Quickly pinpointing the issue at hand requires not just the proper tooling, but a holistic approach that takes into consideration how and why each of these three sources could break.
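A freshness check like the one described can be reduced to a simple rule: flag a table whose latest load is older than its expected update interval. The sketch below is a minimal illustration; the grace period and refresh interval are invented, and in practice the load timestamp would come from a warehouse metadata query rather than be hard-coded.

```python
# Minimal sketch of a data-freshness check: flag a table whose latest record
# is older than its expected refresh interval plus a grace period.
# Thresholds here are hypothetical; real timestamps would come from the warehouse.
from datetime import datetime, timedelta, timezone

def is_stale(last_loaded_at: datetime, expected_interval: timedelta,
             grace: timedelta = timedelta(hours=1)) -> bool:
    """True if the table missed its expected refresh window."""
    age = datetime.now(timezone.utc) - last_loaded_at
    return age > expected_interval + grace

# Example: a daily table last loaded 30 hours ago has breached 24h + 1h grace
last_load = datetime.now(timezone.utc) - timedelta(hours=30)
print(is_stale(last_load, timedelta(days=1)))  # True
```

A check like this only detects the symptom; the three root-cause buckets above (data change, logic change, operational issue) still have to be investigated once it fires.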


Gamifying machine learning for stronger security and AI models

Computer and network systems, of course, are significantly more complex than video games. While a video game typically has a handful of permitted actions at a time, there is a vast array of actions available when interacting with a computer and network system. For instance, the state of the network system can be gigantic and not readily and reliably retrievable, as opposed to the finite list of positions on a board game. Even with these challenges, however, OpenAI Gym provided a good framework for our research, leading to the development of CyberBattleSim. CyberBattleSim focuses on threat modeling the post-breach lateral movement stage of a cyberattack. The environment consists of a network of computer nodes. It is parameterized by a fixed network topology and a set of predefined vulnerabilities that an agent can exploit to laterally move through the network. The simulated attacker’s goal is to take ownership of some portion of the network by exploiting these planted vulnerabilities. While the simulated attacker moves through the network, a defender agent watches the network activity to detect the presence of the attacker and contain the attack.
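The core abstraction (a fixed topology plus planted vulnerabilities that gate lateral movement) can be sketched in a few lines. To be clear, this is not the CyberBattleSim API; the topology, node names, and vulnerability placement below are invented purely to illustrate the idea of an attacker spreading only through exploitable neighbours.

```python
# Toy sketch (NOT the real CyberBattleSim API) of lateral movement: an attacker
# starts on one node and can only move to a neighbour that carries a planted
# vulnerability. Topology and vulnerable nodes are invented for illustration.

topology = {                                  # fixed network topology
    "workstation": ["fileserver", "printer"],
    "fileserver": ["workstation", "database"],
    "printer": ["workstation"],
    "database": ["fileserver"],
}
vulnerable = {"fileserver", "database"}       # predefined exploitable nodes

def reachable_by_attacker(start: str) -> set:
    """Nodes the attacker can own: the start node plus any vulnerable
    neighbours reachable through already-owned nodes (simple BFS)."""
    owned, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for neighbour in topology[node]:
            if neighbour in vulnerable and neighbour not in owned:
                owned.add(neighbour)
                frontier.append(neighbour)
    return owned

print(sorted(reachable_by_attacker("workstation")))
# ['database', 'fileserver', 'workstation']
```

In the actual simulator this spread is driven by a reinforcement learning agent choosing exploits step by step, with a defender agent watching for the resulting activity, rather than computed exhaustively as here.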


Which Industries Would Benefit the Most From Agile Innovation

It may seem surprising that the financial sector is struggling to reach its innovation goals. However, Financier Worldwide found in 2015 that 90% of leaders admitted there was a lack of focus on radical innovation. Several years later, Deloitte’s report ‘Regulatory Trends Outlook for 2018’ claimed the financial industry was being hindered by a ‘legacy infrastructure’ that would take years to transform. For example, a focus on traditional product development means that customer and end-user feedback can’t be incorporated into the development process. Agile methods could rectify this by bringing new collaborative and customer-focused processes to product development. Teams could use a centralised system for the development of prototypes, which would be shared internally in a project’s initial phases. They can then conduct beta testing with a select group of end-users, with feedback incorporated iteratively into the final stages. Another issue is how increasingly stringent regulations may be inhibiting innovation. Financial firms are set to spend an estimated 10% of their revenue on compliance costs by 2022.


Why machine learning struggles with causality

Why do machine learning models fail at generalizing beyond their narrow domains and training data? “Machine learning often disregards information that animals use heavily: interventions in the world, domain shifts, temporal structure — by and large, we consider these factors a nuisance and try to engineer them away,” write the authors of the causal representation learning paper. “In accordance with this, the majority of current successes of machine learning boil down to large scale pattern recognition on suitably collected independent and identically distributed (i.i.d.) data.” i.i.d. is a term often used in machine learning. It supposes that random observations in a problem space are not dependent on each other and have a constant probability of occurring. The simplest example of i.i.d. is flipping a coin or tossing a die. The result of each new flip or toss is independent of previous ones, and the probability of each outcome remains constant. When it comes to more complicated areas such as computer vision, machine learning engineers try to turn the problem into an i.i.d. domain by training the model on very large corpora of examples.
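The coin-flip example of i.i.d. data is easy to demonstrate: every flip is independent of the previous ones and has the same probability of heads, so the empirical frequency settles near that fixed probability. The snippet below is a minimal illustration (the seed and sample size are arbitrary choices for reproducibility).

```python
# The i.i.d. assumption in miniature: independent coin flips with a constant
# probability of heads, whose empirical frequency converges toward 0.5.
import random

random.seed(0)  # arbitrary seed, for a reproducible illustration
flips = [random.random() < 0.5 for _ in range(100_000)]
freq = sum(flips) / len(flips)
print(f"empirical P(heads) = {freq:.3f}")  # close to 0.5
```

Real-world vision data violates this picture in exactly the ways the authors list: interventions, domain shifts, and temporal structure make new observations depend on context rather than arrive as fresh, identically distributed draws.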


WhoIAM: Enabling inclusive security through identity protection and fraud prevention

IT decision-makers are usually quite tuned in to the challenges around the cost of acquiring new customers, keeping user data secure, and managing infrastructure costs. However, large groups of users are often left behind because of an inherent set of biases in identity security. For instance, authenticator apps, while secure, require a reasonably tech-savvy user. On-device biometrics such as a fingerprint sensor or retina scan create a dependency on newer, more powerful hardware. SMS-based MFA, while more readily available, is expensive both to our client and their end customers and is considered less secure than other authentication factors. Even onscreen identity verification challenges tend to be biased towards English speakers who don’t have visual impairments. Asking a non-native speaker to solve a CAPTCHA that identifies all “sidewalks” or “stop lights” often does not translate well, and CAPTCHAs are historically a poor option for the visually impaired. While these are important factors to solve for, consumer brands still have to strike the right balance between security, cost, and usability.


Five ways to control spiralling IT costs after disruption

With an ongoing need to optimise costs, many businesses are suddenly realising they have lost control of their SaaS spend. It’s now common for large businesses to have SaaS applications managed outside the IT department, multiple contracts with the same vendor, or even multiple vendors providing the same service to the business. To combat this, first you need to draw on technology solutions that will give you full visibility of all SaaS application licences and services within the business. Then you need to rationalise them. With SaaS sprawl likely to be coming from outside IT, one way of consolidating this spend is to use tools that leverage single sign-on (SSO) data stored within an organisation’s network to identify hidden licences. Once you see the full picture, you can assess where best to cut back and which licences are redundant. Following on from this point, you need to introduce more accountability for SaaS usage and spend together with strict procurement processes and user chargeback. That’s because services like file storage and collaboration can be too easy to sign up for without the knowledge of IT.
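The SSO-based discovery idea above can be sketched simply: group sign-in events by application, count distinct users, and flag applications missing from the sanctioned list. Everything below (the app names, the event log, the approved list) is invented for illustration; a real implementation would pull events from the identity provider's audit log.

```python
# Hedged sketch of using SSO sign-in data to surface shadow SaaS: group events
# by application, count distinct users, and flag apps that are not on the
# (hypothetical) IT-sanctioned list. All data here is invented for illustration.
from collections import defaultdict

sanctioned = {"Office365", "Salesforce"}           # hypothetical approved apps
sso_events = [                                      # hypothetical SSO log rows
    {"user": "ana", "app": "Office365"},
    {"user": "ben", "app": "Dropbox"},
    {"user": "ana", "app": "Dropbox"},
    {"user": "cho", "app": "Salesforce"},
]

users_per_app = defaultdict(set)
for event in sso_events:
    users_per_app[event["app"]].add(event["user"])

shadow_it = {app: len(users) for app, users in users_per_app.items()
             if app not in sanctioned}
print(shadow_it)  # unsanctioned apps with distinct-user counts: {'Dropbox': 2}
```

Once unsanctioned applications and their user counts are visible, the rationalisation step described above (cutting redundant licences, consolidating duplicate vendors) has something concrete to act on.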


Teknion CIO on the importance of fostering multi-generational talent

Technology has changed rapidly over the years, and each generation has come into technology at a different time. Their perspectives, therefore, are very different when it comes to technology, because each generation views it from the moment they began leveraging it, rather than through the lens of the latest innovation. It’s important to cultivate a multi-generational workforce in technology, especially, because everyone has different perspectives on the opportunities, challenges and shortfalls of tech. It’s a huge opportunity for every organisation to look at these perspectives and use technology in a better way because of that. ... As a CIO, my team and I have to provide the technology that keeps the company running, for our customers as well as the employees that work here every day. Having the perspective of a multi-generational workforce, not only within the technology department but across the wider business, allows us to enable digital transformation programs with more success. At the end of the day, we have to provide the technology, applications and tools that will help people do their jobs better, not force them to work in a certain way.



Quote for the day:

"Failing organizations are usually over-managed and under-led." -- Warren G. Bennis