Daily Tech Digest - August 06, 2020

It’s time to think differently about how to develop cloud computing talent

“Certifications help set a benchmark for a conversation, but we tend to verify during interviews. Personally, I’m far more interested in curiosity, a desire to solve a problem, being self-starters — this talent goes much further as you develop it,” he adds. Sean Farrington, SVP EMEA at Pluralsight, also believes that developing and maintaining cloud computing skills once talent is in place is a challenge. Businesses “need the ability to accurately map skill levels and proficiencies within teams and put in place tailored learning pathways to address knowledge gaps,” he says. Success in this requires a reassessment of how learning is undertaken. Pluralsight, for example, found that 40% of IT professionals prefer learning online, either through self-paced or instructor-led courses, rather than in classroom-based setups. Commenting on this, Farrington adds: “Companies are nothing more than the sum of their parts, and so business leaders must listen to the needs of their employees and implement an appropriate learning environment. In this case, the ability to upskill on demand and in bite-sized chunks is likely to keep cloud computing talent motivated, current and project-ready.”


Working With Intelligence: How AI Will Reshape Remote Work

HR managers and associates are required to undertake many tasks that allow them to comply with legal requirements for hiring as well as the policies issued by their respective companies. Finding the right candidate can be a time-consuming process when all of these compliance requirements are taken into account. However, businesses can create remote positions that ease the load for managers or in-house employees. One of the criticisms about WFH surrounds a business’ ability to monitor the productivity and quality of output from external workers. Fortunately, artificial intelligence and machine learning are on hand to help out. Team leaders, supervisors and managers alike can turn to machine learning programs to monitor staff performance in a non-invasive and accurate manner. More modern systems are capable of utilising information from survey-based tools in order to provide impartial performance reviews and deliver accurate reports that indicate respective employee strengths and weaknesses on a case-by-case basis. Here, technology takes the lead and creates a level of analysis that’s difficult to replicate through human management. This is especially true for companies with a large number of employees that work from remote locations.


Overcoming the Evolving DevOps Skills Gap

It’s clear that in-demand skills don’t always remain in vogue for very long. To help limit the variability in expertise needed from year to year, companies should invest in tools that don’t constantly require learning new techniques to operate and that can automate tasks whenever possible. For example, a growing number of companies work with multiple cloud providers to ensure their applications and services are always available. While a multi-cloud strategy offers benefits, it also likely means running different projects on different providers’ clouds. To limit the amount of skills needed, companies can select container tools that deploy easily to multiple cloud environments without significantly affecting application topology. Furthermore, tools that automate repetitive processes can help your company reconcile a skills gap. Leveraging solutions that automate processes tied to risk, compliance, and governance can help people focus on their core responsibilities and objectives rather than conducting manual data analyses or attempting to learn data-privacy law. Thoughtfully employing technology can also help close skill gaps. With everyone now working remotely, there are fewer opportunities for in-person training and mentoring.


Why developers are falling in love with functional programming

A function with clearly declared inputs and outputs is one without side effects. And a function without side effects is a pure function. A very simple definition of functional programming is this: writing a program only in pure functions. Pure functions never modify variables; they only create new ones as output. (I cheated a bit in the example above: it follows the spirit of functional programming, but still uses a global list. There are better examples, but this one illustrates the basic principle.) Moreover, you can expect a certain output from a pure function given a certain input. In contrast, an impure function may depend on some global variable, so the same input values may lead to different outputs if the global variable differs. That can make debugging and maintaining code a lot harder. There’s an easy rule to spot side effects: since every function must have some kind of input and output, function declarations without any inputs or outputs must be impure. These are the first declarations that you might want to change if you’re adopting functional programming.
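The distinction can be sketched in a few lines of Python (a minimal, illustrative example, not taken from the article):

```python
# An impure counter: the result depends on, and mutates, a global
# variable, so the same call can return different values over time.
counter = 0

def impure_increment():
    global counter
    counter += 1          # side effect: modifies external state
    return counter

# A pure alternative: the output depends only on the declared input,
# and no external state is touched.
def pure_increment(n):
    return n + 1

# Same input, same output, every time.
assert pure_increment(1) == 2
# The impure version gives a different answer on each call.
assert impure_increment() != impure_increment()
```

Spotting which of the two is easier to test and debug is exactly the point the rule above is making.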


IoT Automation Trend Rides Next Wave of Machine Learning, Big Data

Automation takes on a different aspect when IoT data is introduced, according to Susan Foss, product manager for real-time visualization and analytics at Esri, the geographic information system (GIS) giant. What is different? “It’s the nature of the data being collected,” she said. “Organizations have never had this type of information before or at this granularity of time-space detail.” “Before it was more periodic. Now they have it in the form of a living, breathing, constant supply,” she added. That ushers in event processing architectures, changes the pace with which teams have to work with data, and augurs more automation. Foss said Esri is working with users to connect fast-arriving IoT data to location data. The goal is to create immediate visualizations of data on a map. This requires, Foss said, “a delicate balance of compute horsepower against the incoming real-time data, as well as static data sources that might need to be used with it.” And, real-time activity mapping is going indoors in the face of the COVID-19 pandemic. To that end, Esri recently updated its ArcGIS Indoors offering with new space planning templates. The software uses beacons and Wi-Fi to collect data for display on a live map showing activity in offices and other physical plants. Clearly, such capabilities have special import in the wake of coronavirus.


The Right Way of Tracing AWS Lambda Functions

This increased distribution and interdependency is precisely why distributed tracing has grown to be so important and valuable. Distributed tracing is a monitoring practice in which your services collectively and collaboratively record spans that describe the actions they take in servicing one request. The spans related to the same request are grouped into a trace. In order to keep track of which trace is being recorded, each service must include the trace context in its own requests towards other upstream services. In a nutshell, you can think of distributed tracing as a relay race, the track-and-field discipline in which athletes take turns running and passing one another the baton. In this analogy, each service is an athlete and the trace context is the baton: if one of the services drops it, or the handoff between services fails because, for example, they implement different distributed tracing protocols, the trace is broken. Another similarity between distributed tracing and a relay race is that every segment matters: a single slow segment can lose you the race, so you need to be fast in each one to excel.
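The baton-passing can be sketched in a few lines of Python. The function and field names below are purely illustrative, not part of any real tracing library such as those the article alludes to:

```python
import uuid

# Each "service" records a span tagged with the shared trace_id, then
# hands the context to the next service -- the baton in the relay-race
# analogy. Dropping or altering the context would break the trace.
spans = []

def start_trace():
    return {"trace_id": str(uuid.uuid4())}

def record_span(ctx, service, operation):
    spans.append({"trace_id": ctx["trace_id"],
                  "service": service,
                  "operation": operation})
    return ctx  # pass the baton onward, unchanged

def handle_request():
    ctx = start_trace()
    ctx = record_span(ctx, "frontend", "GET /checkout")
    ctx = record_span(ctx, "payments", "charge_card")
    record_span(ctx, "inventory", "reserve_stock")

handle_request()
# All three spans share one trace_id, so they group into a single trace.
assert len({s["trace_id"] for s in spans}) == 1
```

In a real system the context travels in request headers between processes, but the principle is the same: every hop must carry the identifier forward intact.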


Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create

At the bottom of the threat hierarchy, the researchers listed some "low-concern" applications – the petty crime of AI, if you will. On top of fake reviews or fake art, the report also mentions burglar bots, small devices that could sneak into homes through letterboxes or cat flaps to relay information to a third party. Burglar bots might sound creepy, but they could be easily defeated – in fact, they could pretty much be stopped by a letterbox cage – and they couldn't scale. As such, the researchers don't expect that they will cause huge trouble anytime soon. The real danger, according to the report, lies in criminal applications of AI that could be easily shared and repeated once they are developed. UCL's Matthew Caldwell, first author of the report, said: "Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime." The marketisation of AI-enabled crime, therefore, might be just around the corner.


Organic data-transfer technology holds promise for IoT

Significantly, point-to-point links using devices made of organic matter could solve some sustainability issues, according to the U.K.'s Newcastle University. The tech industry has long wrestled with questions about how to encourage and make economical the recycling of hard-to-break-down traditional electronics. LEDs are full of heavy metals, for example. Increasingly rapid lifecycle upgrades have exacerbated the challenges, and as IoT deployments expand, those questions could become even more pressing. OLEDs could be a solution, but their data rates haven't been great—they're far slower than conventional LEDs. At Newcastle University, researchers believe a new type of OLED could enable the faster data speeds required in a VLC-driven IoT communications network. Significantly, the OLED would be sustainable, since OLEDs are natural, organic and free of eco-unfriendly heavy metals. OLEDs have achieved around 10 Mbps speeds with add-on equalization algorithms and wavelength division multiplexing, whereas eco-unfriendly LEDs churn out a healthy 35 Gbps. Equalization is a process where a specific band's energy is increased or decreased to level things out and improve data rates and bandwidth.


What Is Fintech And How Does It Affect How I Bank?

Fintech helps expedite processes that once took days, weeks or even months, like requesting a credit score report or sending an international money transfer. Platforms like Upstart and TransferWise accomplish these tasks in a fraction of the time that was the norm even five years ago. There’s been speculation about how fintech might help expedite traditionally red-tape-bound processes like distributing economic stimulus funds. Fintech also holds the potential to improve financial inclusion: In some parts of the world, fintech fills needs for the unbanked, where governmental or institutional support is lacking. Part of the reason fintech has the ability to streamline traditionally clunky processes is because it’s based on ones and zeros rather than human skills and opinions. While many fintech platforms include elements of both traditional brokers/advisors and algorithms, others help users navigate financially complex tasks without interacting with a real, live human at all. For instance, today’s consumers can bypass traditional bank branches for things like applying for a loan (Lending Club) or even a mortgage (Better.com). Casual investors no longer need to meet face-to-face with financial experts to painstakingly go over the ins and outs of their portfolios—they can peruse their options online, or even enlist the help of chatbots to make decisions.


People: The one constant in an ever-evolving time of change

Despite the emphasis on speed, however, it is important that people remain a constant, central focus of the process. As such, a new, broadly applicable approach to change management is necessary to ensure clients, customers, and employees reach and maintain success. The creation of innovative solutions, made possible by tapping into lucrative fintech partnerships and digital initiatives, should be focused on building a strong organizational culture that will effectively support people through these changes. As we put people first in the change model, some of our partners, and one regional northeast bank in particular, recently reinforced how thinking outside the box can pay off for change. The CEO utilized an innovative approach to leverage the role of bankers in the process. Rather than give the core responsibility to the digital and IT teams, he gave bankers and sales professionals a seat at the transformation table. By integrating front-end bankers into the core change management team, high-performing bankers were able to think about front-end, client-facing concerns that other members of the team may not have experienced.



Quote for the day:

"Either write something worth reading or do something worth writing." -- Benjamin Franklin

Daily Tech Digest - August 05, 2020

Data privacy and data security are not the same

"Data privacy is, in essence, a subset of an organization's data security," Ewing said. "The distinction is important because, although the tools used to maintain data privacy and to ensure data security may overlap, the two are generally addressed differently by different teams using different tools." This overlap can cause confusion, leaving companies who focus just on data security with the false impression that, by default, data privacy also is protected. This is not the case. Unlike data security, which focuses on protecting all of an organization's data from theft or corruption (like during a ransomware attack), data privacy is more granular. To ensure data privacy, organizations must understand, track, and control things like who is authorized to access the data and where the data is stored -- in a Health Insurance Portability and Accountability Act (HIPAA)-compliant cloud, for example. A good example of differences between data privacy and data security was the harvesting of 87 million Facebook user profiles by the now-defunct political consulting firm Cambridge Analytica during the 2016 US presidential election, said Joshua Kail, a communications consultant who ran agency-side PR for Cambridge Analytica until it shut down in May 2018.
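As a toy illustration of that granularity (the roles and record types below are invented for the example, not drawn from the article), data privacy can be thought of as an authorization check layered on top of an already-secured store:

```python
# Security protects the data store itself; privacy asks a narrower
# question: is this particular requester allowed to see this record?
AUTHORIZED_ROLES = {"patient_record": {"physician", "nurse"}}

def can_access(role, record_type):
    """Return True only if the role is authorized for this record type."""
    return role in AUTHORIZED_ROLES.get(record_type, set())

# Encrypting the database protects everyone's data equally;
# this check is what keeps the wrong insider out of one record.
assert can_access("physician", "patient_record")
assert not can_access("marketing", "patient_record")
```

A breach of the store is a security failure; a marketing analyst reading patient records through a legitimate login is a privacy failure, and only the second kind of control catches it.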


State of the Art in Automated Machine Learning

Through the years of development of the machine learning domain, we have seen that a large number of tasks around data manipulation, feature engineering, feature selection, model evaluation, and hyperparameter tuning can be defined as an optimization problem and, with enough computing power, efficiently automated. We can see numerous proofs of that not only in research but also in the software industry as platform offerings or open-source libraries. All these tools use predefined methods for data processing, model training, and evaluation. The creative approach to framing problems and applying new techniques to existing problems is the one that is not likely to be replicated by machine automation, due to the large number of possible permutations, the complex context, and the expertise the machine lacks. As an example, look at the design of neural net architectures and their applications, a problem where the search space is so vast that progress is still mostly human-driven. ... In theory, the entire ML process is computationally hard. From fitting data to, say, a neural network, to hyperparameter selection, to neural architecture search (NAS), these are all hard problems in the general case. However, all of these components have been automated with varying degrees of success for specific problems thanks to a combination of algorithmic advances, computational power, and patience.
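Framing tuning as an optimization problem can be illustrated with a toy grid search. The loss function below is a made-up stand-in for training and evaluating a real model, not any actual AutoML API:

```python
# A toy "validation loss" standing in for training and evaluating a
# model with the given hyperparameters; by construction its minimum
# is at learning_rate=0.1, depth=4.
def validation_loss(learning_rate, depth):
    return (learning_rate - 0.1) ** 2 + (depth - 4) ** 2

# Exhaustively evaluate a small grid and keep the best configuration.
grid = [(lr, d) for lr in (0.01, 0.1, 1.0) for d in (2, 4, 8)]
best = min(grid, key=lambda p: validation_loss(*p))
print(best)  # → (0.1, 4)
```

Real AutoML systems replace the exhaustive grid with smarter strategies (random search, Bayesian optimization, evolutionary methods), but the framing is the same: a search over configurations scored by an evaluation routine.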


How AI is Becoming Essential to Cyber-Strategy

The problem with machine learning is that the AI is limited to the features that it has been taught to expect. Fooling a machine learning security system is as simple as adding an unexpected/unprogrammed feature into the exploit. Imagine a card trick such as “find the lady” where the machine learning software is expecting the dealer to operate inside the given parameters (the dealer is only moving around these three cards), but the dealer is cheating by having a fourth card. Because the concept of the fourth card is outside the expected features, the program can be defeated. What artificial neural networks can do is allow an AI to self-determine what features it uses to reach a conclusion. An artificial neural network still requires some degree of human input to confirm if a conclusion is incorrect, but it effectively self-organizes how it reviews and manages the data it has access to. As an example, an AI looking for new types of viruses can sense everything happening in a computer and then identify, based on everything, whether a program or even an activity in the memory is doing something unwelcome. It does not need to have seen the behavior before; it only has to recognize the outcome, or potential outcome.


ICML 2020 highlights: A Transformer-based RL agent, causal ML for increased privacy, and more

Microsoft researchers are in full summer swing when it comes to advancing machine learning in accessibility, privacy, healthcare, and other areas. As Microsoft Partner Research Manager and ICML President John Langford puts it, “ICML is a very broad conference, so its specialty is in some sense ‘all of the above.’” But Langford goes on to add that one of the topics that ICML has a long track record on is currently trending: reinforcement learning. A brief glance through the sessions and workshops presented by Microsoft researchers shows the wide influence reinforcement learning has in our world today, from natural language to robotics to infrastructure considerations like transportation. Beyond the research contributions, Microsoft was also a sponsor of and recruiter at the conference. Additionally, the company sponsored two events co-located with the conference, the first Women in Machine Learning Un-Workshop and the fourth Queer in AI Workshop. The impact of the conference—now and in the future—is multifaceted, according to Langford. “ICML is ‘the’ summer machine learning conference. As such, it’s critically important to the academic discovery, review, and dissemination process, a great way to meet fellow researchers, and a natural recruiting point for the field,” he says.


An open source solution for continuous testing at scale

With recent and ongoing updates, organizations can leverage Cerberus' features from development to operations. It expands digital experience test coverage by executing tests on a variety of browsers, devices, and apps. Its native connectors for APIs (including SOAP and REST), desktop applications, and Apache Kafka enable testing legacy apps, APIs, event-driven microservices, streaming services, business intelligence, data science applications, and other use cases. During the software development lifecycle, Cerberus supports fast iterations in test management, execution, and reporting. Users can create test specifications in plain English, compose tests using a library, execute in parallel on various devices, and do advanced reporting. Native integration with CI/CD solutions, such as Jenkins, Bitbucket, and others, combined with one-click ticket creation in Jira and other tools, makes bug resolution faster and easier. Cerberus can also monitor customer experience and business operations. Tests can be functional and technical, allowing organizations to test complex scenarios. For example, France's leading TV channel, TF1, uses it for quality assurance on its streaming platform.


Retrospectives for Management Teams

Good action points are the ones that propel the team forward and make them productive; I focus on quantity, quality, and the process itself. When it comes to quantity, it’s always wise to limit our commitments in order to maximize our chance of delivering them on time. Sometimes it pains the team to let go of some great ideas and not turn them into action points after a meeting. I believe it’s our duty as facilitators to increase the likelihood of a positive impact, even if it means cutting the number of initiatives we start simultaneously. When it comes to quality, in Radical Candor Kim Scott gives an easy-to-remember recipe for action points. You need to have a one-line answer to who will do what by when? If you do not have an answer on all three aspects, you don’t have an action point after all. If you follow her lead, you get a statement that is easy to act upon, easy to check if it’s being done, and easy to communicate with your stakeholders. Regarding the process, I like to encourage people to write their action items themselves - it helps to frame them in a way they understand and find easy to act upon. It helps to remember them, too.


What is an IT director? Everything you need to know about one of the top jobs in tech

The first IT professionals were employed to help their organisations manage mainframe systems. As computers became more integral to the way we work, so technology leaders – be they IT directors or CIOs – started to be appointed. IT director was the more commonly used term initially. Through the late 1990s and into the new millennium, it became customary for the top executive in a business to take the CIO moniker. While that's still often the case, it's not a hard and fast rule – many organisations still use the IT director title to describe their most senior tech executive, or use closely related titles, such as head of IT, head of technology, vice president of IT, or VP of technology. Apart from the job title, the roles are perceived to have a subtly different focus. Many big organisations now employ a CIO and an IT director. Where both executives are in situ, a split in responsibilities is likely to occur. IT directors are more likely to ensure day-to-day technology operations meet the mark, covering areas such as system uptime, service maintenance and vendor agreements. CIOs, on the other hand, are seen as the outward face of the technology department – CIOs spend less time in the data centre and more time engaging with their business peers in an attempt to understand how technology can be used to help meet their demands.


The Age of Accelerating Strategy Breakthroughs

Leading companies are also prioritizing the need to identify threats and opportunities created by megatrends that can rapidly reshape businesses. The coronavirus pandemic has shown that negative megatrends like epidemics and climate change can no longer be treated as tail risks so extreme that no preparation would make a difference. Companies have to build up resilience to safeguard profits by being prepared to play ferocious defense against other negative megatrends gathering momentum, like public debt crises, at one end of the spectrum. At the other, they must aggressively pursue new prospects created by positive megatrends like digitalization and health and wellness. Macro shifts set off by the pandemic illustrate how quickly megatrends can force companies to reset strategies. Retailers are rerouting investments earmarked for building physical locations into upgrading online commerce features and delivery services. Financial services companies are accelerating many more digital-only offerings, such as contactless payments and risk management products such as health insurance.


How to avoid cloud vendor lock-in and take advantage of multi-vendor sourcing options

Businesses recognise the benefit of utilising different suppliers and over half are now using more than one public cloud provider. According to McQuire, the moves of major cloud providers are reflecting this trend, with the launch of products like Google Cloud’s Anthos and Big Query Omni, as well as Microsoft’s Azure Arc. “Customers and developers want depth in cloud services but don’t want to be locked into a single cloud environment. Above all, they want choice when it comes to spinning up infrastructure for new applications, lift-and-shift projects or maintaining consistency across their on premises, public cloud and edge environments,” he comments. McQuire warns, however, that while the market is still very early in its transition to the cloud, “care must be taken in pursuing multi-cloud approaches, so that they are not adding even more complexity to an already highly-complicated cloud computing stack. “Whilst consistency is key in multi-cloud, there will be those that do not want a lowest common denominator approach in order to support this strategy.”


How Ransomware Threats Are Evolving & How to Spot Them

"The cleverness, the creativity, and the intimate knowledge of these very, very minuscule technical details to craft a bypass like that is almost unseen in criminal malware," says Wisniewski. "It's the kind of thing we expect to see in espionage-style attacks, not in criminal attacks." Some attackers bypass technical tools by "living off the land," or using legitimate admin tools to achieve goals. Some use software deployment tools to roll out ransomware instead of delivering patches to Windows machines, Wisniewski says as an example. They may abuse PowerShell, other Microsoft tools, or so-called "gray hat" tools like Metasploit or Cobalt Strike. This behavior isn't new, Wisniewski says. "What is new is that may be the only indication you're going to get that they're in your network." Organizations may notice small, unusual things once in a while, remedy them, and close the ticket without realizing they're part of a larger incident. By the time they do, an attacker has been in their network for weeks. WastedLocker and Maze will "sit there for a month" to figure out the thing that will shut down their enterprise victim.



Quote for the day:

"Entrepreneurs must be willing to be misunderstood for long periods of time." -- Jeff Bezos

Daily Tech Digest - August 04, 2020

Ethical AI in healthcare

In many ways these technologies are going to be shaping us even before we've answered this question. We'll wake up one morning and realize that we have been shaped. But maybe there is an opportunity for each of us, in our own settings and in conversations with our colleagues and at the dinner table, and with society, more broadly, to ask the question, What are we really working toward? What would we be willing to give up in order to realize the benefits? And can we build some consensus around that?  How can we, on the one hand, take advantage of the benefits of AI-enabled technologies and on the other, ensure that we're continuing to care? What would that world look like? How can we maintain the reason why we came into medicine in the first place, because we care about people, how can we ensure that we don't inadvertently lose that?  The optimistic view is that, by virtue of freeing up time by moving some tasks off of clinicians’ desks, and moving the clinician away from the screen, maybe we can create space, and sustain space for caring. The hope that is often articulated is that AI will free up time, potentially, for what really matters most. That's the aspiration. But the question we need to ask ourselves is, What would be the enabling conditions for that to be realized?


Apache Cassandra’s road to the cloud

What makes the goal of open sourcing cloud-native extensions to Cassandra achievable is the emergence of Kubernetes and related technologies. The fact that all of these technologies are open source and that Kubernetes has become the de facto standard for container orchestration has made it thinkable for herds of cats to converge, at least around a common API. And enterprises embracing the cloud has created demand for something to happen, now. A cloud-native special interest group has formed within the Apache Cassandra community and is still at the early stages of scoping out the task; this is not part of the official Apache project, at least not yet. Of course, the Apache Cassandra community had to get its own house in order first. As Steven J. Vaughan-Nichols recounted in his exhaustive post, Apache Cassandra 4.0 is quite definitive, not only in its feature-completeness, but also in the thoroughness with which it has fleshed out the bugs to make it production-ready. Unlike previous dot-zero versions, when Cassandra 4.0 goes GA, it will be production-ready. The 4.0 release hardens the platform with faster data streaming, not only to boost replication performance between clusters but also to make failover more robust. But 4.0 stopped short of anything to do with Kubernetes.


From doorbells to nuclear reactors: why focus on IoT security

An important step in network security for IoT is identifying the company’s most essential activities and putting protections around them. For manufacturing companies, the production line is the key process. Essential machinery must be segmented from other parts of the company’s internet network such as marketing, sales and accounting. For most companies, just 5% to 10% of operations are critical. Segmenting these assets is vital for protecting strategic operations from attacks. One of the greatest risks of the connected world is that something quite trivial, such as a cheap IoT sensor embedded in a doorbell or a fish tank, could end up having a huge impact on a business if it gets into the wrong communication flow and becomes an entry point for a cyber attack. To address these risks, segmentation should be at the heart of every company’s connected strategy. That means defining the purpose of every device and object linked to a network and setting boundaries, so it only connects to parts of the network that help it serve that purpose. With 5G, a system known as Network Slicing helps create segmentation. Network Slicing separates mobile data into different streams. Each stream is isolated from the next, so watching video could occur on a separate stream to a voice connection.


The ABCs of Data Science Algorithms

An organization’s raw data is the cornerstone of any data science strategy. Companies who have previously invested in big data often benefit from a more flexible cloud or hybrid IT infrastructure that is ready to deliver on the promise of predictive models for better decision making. Big data is the invaluable foundation of a truly data-driven enterprise. In order to deploy AI solutions, companies should consider building a data lake -- a centralized repository that allows a business to store structured and unstructured data on a large scale -- before embarking on a digital transformation roadmap. To understand the fundamental importance of a solid infrastructure, let’s compare data to oil. In this scenario, data science serves as the refinery that turns raw data into valuable information for business. Other technologies -- business intelligence dashboards and reporting tools -- benefit from big data, but data science is the key to unleashing its true value. AI and machine learning algorithms reveal correlations and dependencies in business processes that would otherwise remain hidden in the organization’s collection of raw data. Ultimately, this actionable insight is like refined oil: It is the fuel that drives innovation, optimizing resources to make the business more efficient and profitable.


Soon, your brain will be connected to a computer. Can we stop hackers breaking in?

Some of the potential threats to BCIs will be carry-overs from other tech systems. Malware could cause problems with acquiring data from the brain, as well as sending signals from the device back to the cortex, either by altering or exfiltrating the data. Man-in-the-middle attacks could also be recast for BCIs: attackers could either intercept the data being gathered from the headset and replace it with their own, or intercept the data being used to stimulate the user's brain and replace it with an alternative. Hackers could use methods like these to get BCI users to inadvertently give up sensitive information, or gather enough data to mimic the neural activity needed to log into work or personal accounts. Other threats to BCI security will be unique to brain-computer interfaces. Researchers have identified malicious external stimuli as one of the most potentially damaging attacks that could be used on BCIs: feeding in specially crafted stimuli to affect either the users or the BCI itself in an attempt to extract certain information, such as showing users images to gather their reactions. Other similar attacks could be carried out to hijack users' BCI systems, by feeding in fake versions of the neural inputs causing them to take unintended actions – potentially turning BCIs into bots, for example.



Digital Strategy In A Time Of Crisis

The priority is to protect employees and ensure business continuity. To achieve this, it is essential to continue adapting the IT infrastructure needed for massive remote working and to continue the deployment of collaborative digital systems. Beyond these new challenges, the increased risks related to cybersecurity and the maintenance of IT assets, particularly the application base, require vigilance. After responding to the emergency, the project portfolio and the technological agenda must be rethought. This may involve postponing or freezing projects that do not create short-term value in the new context. Conversely, it is necessary to strengthen transformation efforts capable of increasing agility and resilience, whether in cybersecurity, advanced data analysis tools, planning or optimisation of the supply chain. The third major line of action in this crucial period of transition is to tighten human resources management, focusing on the large-scale deployment of agile methods and the development of critical expertise such as data science, artificial intelligence and cybersecurity. The war for talent will re-emerge in force when the recovery comes, and it is therefore important to strengthen the attractiveness of the company.


Why ISO 56000 Innovation Management matters to CIOs

The ISO 56000 series presents a new framework for innovation, laying out the fundamentals, structures and support that ISO leaders say are needed within an enterprise to create and sustain innovation. More specifically, the series provides guidance for organizations to understand and respond to changing conditions, to pursue new opportunities and to apply the knowledge and creativity of people within the organization and in collaboration with external interested parties, said Alice de Casanove, chairwoman of the ISO 56000 standard series and innovation director at Airbus. ISO, which started work on these standards in 2013, started publishing its guidelines last year. ISO 56002, the guide for innovation management systems, and ISO 56003, on tools and methods for innovation partnership, were published in 2019. ISO released ISO 56000, Innovation management -- Fundamentals and vocabulary, in February 2020. Four additional parts of the series are forthcoming. The committee developed the innovation standards so that they'd be applicable to organizations of all types and sizes, de Casanove said. "All leaders want to move from serendipity to a structured approach to innovation management," she explained.


How plans to automate coding could mean big changes ahead

Known as a "code similarity system", the principle that underpins MISIM is not new: technologies that try to determine whether a piece of code is similar to another one already exist, and are widely used by developers to gain insights from other existing programs. Facebook, for instance, uses a code recommendation system called Aroma, which, much like auto-text, recommends extensions for a snippet of code already written by engineers – based on the assumption that programmers often write code that is similar to code that has already been written. But most existing systems focus on how code is written in order to establish similarities with other programs. MISIM, on the other hand, looks at what a snippet of code intends to do, regardless of the way it is designed. This means that even if different languages, data structures and algorithms are used to perform the same computation, MISIM can still establish similarity. The tool uses a new technology called context-aware semantic structure (CASS), which lets MISIM interpret code at a higher level – not just a program's structure, but also its intent. When it is presented with code, the algorithm translates it into a form that represents what the software does, rather than how it is written; MISIM then compares the outcome it has found for the code to that of millions of other programs taken from online repositories.
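MISIM's CASS is far more sophisticated than anything shown here, but the core idea of judging code by what it does rather than how it is written can be sketched very simply. The toy below (all function names and sample inputs are my own, not part of MISIM) fingerprints functions by their observed input/output behaviour, so two structurally different implementations of the same computation come out as similar.

```python
def behavior_signature(fn, inputs):
    """Fingerprint a function by what it returns on sample inputs,
    ignoring how it is written."""
    sig = []
    for args in inputs:
        try:
            sig.append(fn(*args))
        except Exception as exc:
            sig.append(type(exc).__name__)  # record failures too
    return tuple(sig)

def similar(fn_a, fn_b, inputs):
    """Two snippets are 'similar' if they behave identically
    on the sample inputs."""
    return behavior_signature(fn_a, inputs) == behavior_signature(fn_b, inputs)

# Structurally different implementations of the same computation:
def total_loop(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

def total_builtin(xs):
    return sum(xs)

samples = [([1, 2, 3],), ([],), ([-5, 5],)]
print(similar(total_loop, total_builtin, samples))  # True
```

A purely syntactic comparison would rate these two functions as quite different (a loop versus a builtin call); a behaviour-based view, like the intent-level view MISIM aims for, sees them as the same computation.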


RPA bots: Messy tech that might upend the software business

Where it gets interesting is that these RPA bots are basically building the infrastructure for all the other pieces to fit together such as AI, CRM, ERP and even documents. They believe in the long-heralded walled-garden approach in which enterprises choose one best-of-breed infrastructure platform like Salesforce, SAP or Oracle and build everything on top of that. History has shown that messy sometimes makes more sense. The internet did not develop from something clean and organized -- it flourished on top of TCP: a messy, inefficient and bloated protocol. Indeed, back in the early days of the internet, telecom engineers were working on an organized protocol stack called Open Systems Interconnection (OSI) that was engineered to be highly efficient. But then TCP came along as the inelegant alternative that happened to work and, more importantly, made it possible to add new devices that no one had planned on in the beginning. Automation Anywhere's CTO Prince Kohli said other kinds of messy technologies have followed the same path. After TCP, HTTP came along to provide a lingua franca for building web pages. Then, web developers started using HTTP to connect applications using JavaScript Object Notation (JSON).



Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang