Daily Tech Digest - April 25, 2021

Solving the security challenges of public cloud

Compounding matters is the lack of a unified framework for dealing with public cloud security. End users and cloud consumers are forced to deal with increased spend on security infrastructure such as SIEMs, SOAR, security data lakes, tools, maintenance and staff — if they can find them — to operate with an “adequate” security posture. Public cloud isn’t going away, and neither is the increase in data and security concerns. But enterprise leaders shouldn’t have to continue scrambling to solve these problems. We live in a highly standardized world. Standard operating processes exist for the simplest of tasks, such as elementary school student drop-offs and checking out a company car. But why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society? The ONUG Collaborative had the same question. Security leaders from organizations such as FedEx, Raytheon Technologies, Fidelity, Cigna, Goldman Sachs and others came together to establish the Cloud Security Notification Framework. The goal is to create consistency in how cloud providers report security events, alerts and alarms, so end users receive improved visibility and governance of their data.


Building A Global AI Brand With End-To-End Data Science & Engineering Solutions

At present, people think more from the model design perspective when speaking about data science and analytics than about data engineering. Going forward, model design will not matter much because most algorithms will be available as APIs. In fact, companies like Tredence are building algorithms that have a high degree of verticalization across industries and can be made available as APIs. AI-as-API is a good differentiator. It allows data scientists to spend less time building algorithms from scratch. Having said that, readily available algorithms can offer only up to 90% accuracy. The true test of a data scientist would be whether he/she can take the accuracy from 90 to 99%. It requires domain expertise, analytical thinking, and the ability to identify edge use cases. Working around biases and long-tail use cases of AI systems would also become very important. While designing algorithms, data scientists often assume that the end-user is AI and not human. There is a need for humanising these systems. Design thinking has seeped into how software is built; next, it should enter AI algorithms.


How to Keep an Innovative Mindset Present In Your Business

It’s essential that managers and executives take accountability and engage employees in more creative ways and foster innovative mindsets. You can do so in many ways, starting with rewarding innovative progress and changing company dynamics. First, you can offer bonuses or other incentives to employees who come up with new ideas for the company. These innovations don’t need to be fully formed or implemented right away. However, this kind of reward system encourages more of the same behavior. Employees will seek to create and flourish once they have the time, resources, and motivation. Then, you can change the way employees interact with the business itself. In traditional models, shareholders or executives own the business. In newer, more innovative dynamics, though, employees can now own parts of the company as well through shares that accrue over time. ... Technology is one of the best signs of innovation. It combines practicality, accessibility, and functionality, which helps it constantly evolve. Something like a smartphone builds on countless previous innovations and uses them to keep creating. The same concept applies to the workplace.


RBI to issue cybersecurity norms for payment services

While the standards for fintech-driven payment services providers will be similar to cyber hygiene norms issued recently for banks and non-banking finance companies, the RBI is quite clear that firms will have to do more than observe the minimum standards to ensure safety as digital transactions gain further traction. “On cyber frauds, Reserve Bank of India has issued very recently basic guidelines on cyber hygiene and cybersecurity for banks and certain NBFCs,” said RBI executive director T. Rabi Sankar. “We would follow that up with respect to other entities such as payments systems operators in the payments space. Those are getting finalised and will be issued soon,” he added. “Having said that, the minimum standards set by the regulator for the regulated entities are needed, but they would never be enough. As digitisation increases in any sphere, payments or otherwise, as people do more and more digital transactions, institutions themselves will have to do more than the minimum standards that regulators set, to deal with any cybersecurity threats,” he said, adding that individual users would also need to be alert as there is no alternative to being aware of the risks in undertaking digital transactions.


The differences between data analytics, machine learning and AI

So, we have three distinct areas of expertise we’ve outlined there. Each has its own applications, subsets, and specialisations, making them very different fields. However, as you may have noticed already, there are certainly some areas where they overlap. Below, we’ve outlined just some of the ways in which machine learning, data analytics, and AI overlap:

- Data-driven. Each of these areas relies on analysing huge amounts of data. The more information available, the more effective they are at producing results. It often takes a lot of computer processing power to manage such large data sets.
- Insights. Data analytics, AI, and machine learning can all be used to produce detailed insights in particular areas. By examining data, each can identify patterns, highlight trends, and provide valuable and actionable outcomes.
- Predictive models. These technologies can also help to create forecasts and predictions based on existing data. Again, this process can help organisations of all kinds plan for the future and make informed decisions.

Of course, many other areas relate closely to those of AI, ML, and data analytics.


GoodData unveils analytics as a set of microservices in data-as-a-service platform

The ability to deploy GoodData.CN anywhere is crucial because multiple centers of data gravity will always exist in the enterprise, noted Stanek. It’s unlikely any major enterprise is ever going to be able to standardize on a single data warehouse or data lake, he said. The GoodData.CN platform provides all the metadata capabilities required to maintain a single source of truth across what are rapidly becoming highly federated environments, noted Stanek. A programmable API also makes it feasible to deploy a headless data-as-a-service platform for processing analytics that can be readily accessed and consumed as a service by multiple applications. Previously, individual developers had to take the time and effort to embed analytics capabilities directly within their application, noted Stanek. The GoodData.CN platform makes applications more efficient and, as a consequence, smaller. That is because more analytics processing is offloaded to the headless platform, added Stanek. Pressure to embed analytics in every application is mounting as end users seek to make faster and better fact-based decisions.


Why Technology is More Important Than Ever to Financial Services Organisations

Larger financial institutions have sometimes drawn criticism for their pace with digital innovation, with suggestions that a risk-averse culture impedes innovative new projects. But it is important to note that they are not out of the game. They can still rely on their great customer access, brand cachet and understanding of regulations to compete with nimble challengers. Additionally, data is at the heart of digital transformation, a resource that retail and private banking companies have in abundance. Blending deep, data-powered insight with their powerful human-centric brands gives these organisations an opportunity to create real differentiation when it comes to customer experience. If this is done correctly, they can become smarter, faster and more resilient, while retaining their brand identity. Attitudes are also changing. Some 79% of all organisations in our research now believe traditional business models are being radically disrupted, and that innovation is clearly underway. A further 92% believe that their business embraces change rather than tries to resist it.


The future of work is uniquely human

Re-architecting work is not about simply automating tasks and activities. At its core, it is about configuring work to capitalize on what humans can accomplish when work is based on their strengths. In the survey, executives identified two factors related to human potential as the most transformative for the workplace: building an organizational culture that celebrates growth, adaptability and resilience (45%), and building workforce capability through upskilling, reskilling, and mobility (41%). Leaders should find ways to create a shared sense of purpose that mobilizes people to pull strongly in the same direction as they face the organization’s current and future challenges, whether the mission is, like Delta’s, to keep people connected, or centered on goals such as inclusivity, diversity or transparency. They should trust people to work in ways that allow them to fulfill their potential, offering workers a degree of choice over the work they do to align their passions with organizational needs. And they should embrace the perspective that reimagining work is key to the ability to achieve new and better outcomes—in a world that is itself being constantly reimagined.


Why applied AI requires skills and knowledge beyond data science

“A business problem that can be solved by a model alone is very unusual. Most problems are multifaceted and require an assortment of skills—data pipelines, infrastructure, UX, business risk analysis,” Rochwerger and Pang write in Real World AI. “Put another way, machine learning is only useful when it’s incorporated into a business process, customer experience or product, and actually gets released.” Applied machine learning needs a cross-functional team that includes people from different disciplines and backgrounds. And not all of them are technical. Subject matter experts will need to verify the veracity of training data and the reliability of the model’s inferences. Product managers will need to establish the business objectives and desired outcomes for the machine learning strategy. User researchers will help to validate the model’s performance through interviews with and feedback from end-users of the system. And an ethics team will need to identify sensitive areas where the machine learning models might cause unwanted harm.


Machine learning, explained

Machine learning starts with data — numbers, photos, or text, like bank transactions, pictures of people or even bakery items, repair records, time series data from sensors, or sales reports. The data is gathered and prepared to be used as training data, or the information the machine learning model will be trained on. The more data, the better the program. From there, programmers choose a machine learning model to use, supply the data, and let the computer model train itself to find patterns or make predictions. Over time the human programmer can also tweak the model, including changing its parameters, to help push it toward more accurate results. (Research scientist Janelle Shane’s website AI Weirdness is an entertaining look at how machine learning algorithms learn and how they can get things wrong — as happened when an algorithm tried to generate recipes and created Chocolate Chicken Chicken Cake.) Some data is held out from the training data to be used as evaluation data, which tests how accurate the machine learning model is when it is shown new data. The result is a model that can be used in the future with different sets of data.
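
The workflow described above can be sketched in a few lines of plain Python. This is a toy illustration only, with an invented two-feature dataset and a simple nearest-neighbour rule standing in for the "chosen model"; real projects would use a library such as scikit-learn and far more data:

```python
# Toy sketch of the workflow: gather data, train a simple model,
# then evaluate it on data held out from training.
import math

# Tiny labelled dataset (invented for illustration): (feature_1, feature_2) -> label
data = [((1.0, 1.1), "a"), ((0.9, 1.0), "a"), ((1.2, 0.9), "a"),
        ((5.0, 5.2), "b"), ((4.8, 5.1), "b"), ((5.3, 4.9), "b")]

train, evaluation = data[:4], data[4:]   # hold some data out as evaluation data

def predict(point):
    """1-nearest-neighbour: return the label of the closest training example."""
    return min(train, key=lambda ex: math.dist(point, ex[0]))[1]

# Test how accurate the model is when shown the held-out data
correct = sum(predict(x) == label for x, label in evaluation)
print(f"accuracy on held-out data: {correct}/{len(evaluation)}")
```

The same split-train-evaluate loop applies whatever model is chosen; tweaking the model (here, for instance, swapping in k-nearest-neighbour with a different k) is the "changing its parameters" step the article mentions.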



Quote for the day:

"You may be good. You may even be better than everyone esle. But without a coach you will never be as good as you could be." -- Andy Stanley

Daily Tech Digest - April 24, 2021

An Insight Into Global Payment Technologies With James Booth

With the e-commerce market set to reach a predicted market volume of £92 million by 2025, and the opportunity for cross-border expansion at an all-time high, the demand for more localised and innovative payment methods will only continue to grow. More and more customers are now online, looking for products or services that suit their very specific needs. A shopper might look across borders for what they want: better-quality products, payment methods accepted, stronger brand loyalty, and more. But they will quickly abandon the transaction page if their preferred payment method is not available. Ultimately, payment choice will play a major role in driving sales in the future, meaning merchants will need a diverse payment portfolio to ensure transactions are completed and customer loyalty retained. This will continue to spark increased innovation for payments, but also the proliferation of niche local payment options across the globe. However, as digital payments head towards a global tipping point, the need for greater regulation and security will also continue to grow.


Dealing With Stubbornness Of AI Autonomous Vehicles

Shifting gears, the future of cars entails self-driving cars. This stubbornness element in the flatbed truck tale brings up an interesting facet about self-driving cars and one that few are giving much attention to. First, be aware that true self-driving cars are driven by an AI-based driving system and not by a human driver. Thus, in the case of this flatbed truck scenario, if the car had been a self-driving car, the AI driving system would have been trying to get the car up that ramp and onto the flatbed. Secondly, there are going to be instances wherein a human wants a self-driving car to go someplace, but the AI driving system will “refuse” to do so. I want to clarify that the AI is not somehow sentient since the type of AI being devised today is not in any manner whatsoever approaching sentience. Perhaps far away in the future, we will achieve that kind of AI, but that’s not in the cards right now. This latter point is important because the AI driving system opting to “refuse” to drive someplace is not due to the AI being a sentient being, and instead is merely a programmatic indication that the AI has detected a situation in which it is not programmed to drive.


Rise of APIs brings new security threat vector -- and need for novel defenses

The speed is important. The pandemic has been even more of a challenge for a lot of companies. They had to move to more of a digital experience much faster than they imagined. So speed has become way more prominent. But that speed creates a challenge around safety, right? Speed creates two main challenges. One is that you have more opportunity to make mistakes. If you ask people to do something very fast because there’s so much business and consumer pressure, sometimes you cut corners and make mistakes. Not deliberately. It’s just that software engineers can never write completely bug-free code. But if you have more bugs in your code because you are moving very, very fast, it creates a greater challenge. So how do you create safety around it? By catching these security bugs and issues much earlier in your software development life cycle (SDLC). If a developer creates a new API and that API could be exploited by a hacker -- because there is a bug in that API’s security authentication check -- you have to try to find it in your test cycle and your SDLC. The second way to gain security is by creating a safety net. Even if you find things earlier in your SDLC, it’s impossible to catch everything.


Will you be heading back to the office? Should you?

The vast majority said it had worked out much better than they expected. They found people were more, rather than less, productive. It turns out folks welcomed not having to deal with long commutes, crowded open-plan offices, or Dilbert-like cubicle farms. Of course, not everyone is happy. Juggling kids and office work can mean misery. But in a recent survey of 3,000 staffers on Blind, the professional social network, 35% said they would quit their jobs if work from home ends. That's a lot. I'd hate to replace more than a third of my staff if I insisted everyone return to 1 Corporate Drive. If your people want to work from home, and they've shown they can deliver, why take a chance on losing them? Not everyone is on board with the change. As one Microsoft staffer on Blind put it, "I don’t think the 5-day work in the office will ever be relevant again. You will have Team A and Team B, working 2 days in the office and 3 days at home. Social interaction in person is needed." Notice, though, that even here, there's no assumption of a five-day work week.


4 reasons to learn machine learning with JavaScript

Fortunately, not all machine learning applications require expensive servers. Many models can be compressed to run on user devices. And mobile device manufacturers are equipping their devices with chips to support local deep learning inference. But the problem is that Python machine learning is not supported by default on many user devices. MacOS and most versions of Linux come with Python preinstalled, but you still have to install machine learning libraries separately. Windows users must install Python manually. And mobile operating systems have very poor support for Python interpreters. JavaScript, on the other hand, is natively supported by all modern mobile and desktop browsers. This means JavaScript machine learning applications are guaranteed to run on most desktop and mobile devices. Therefore, if your machine learning model runs on JavaScript code in the browser, you can rest assured that it will be accessible to nearly all users. There are already several JavaScript machine learning libraries. An example is TensorFlow.js, the JavaScript version of Google’s famous TensorFlow machine learning and deep learning library.


4 Software QA Metrics To Enhance Dev Quality and Speed

The caliber of code is fundamental to the quality of your product. Through frequent reviews you can assess the health of your software, thus detecting unreliable code and defects in the building blocks of your project. Identifying flaws is going to help you throughout the dev process and well into the future. Good quality code will allow you to reduce the risks of defects and avoid application and website crashes. Today, much of this process can be automated, avoiding human error and diverting resources toward other tasks. But, there are a number of code quality analytics you can focus on. ... Flagging issues in the working process can draw attention to inefficiencies, allowing the opportunity to implement project management solutions. Once flaws are established, there’s a whole host of management software for small businesses and large businesses alike to improve efficiency. Automation can also help you through the testing process. According to PractiTest, 78% of organizations currently use test automation for functional or regression tests. This automation will ultimately save time and money, eliminating human error and allowing resources to be redirected elsewhere in the dev process.


5 Fundamental But Effective IoT Device Security Controls

IoT devices introduce a host of vulnerabilities into organizations’ networks and are often difficult to patch. With more than 30 billion active IoT device connections estimated by 2025, it is imperative information-security professionals find an efficient framework to better monitor and protect IoT devices from being leveraged for distributed denial of service (DDoS), ransomware or even data exfiltration. When the convenience of a doorbell camera, robot vacuum cleaner or cellphone-activated thermostat could potentially wreak financial havoc or threaten physical harm, the security of these devices cannot be taken lightly. We must refocus our cyber-hygiene mindset to view these devices as potential threats to our sensitive data. There are too many examples of threat actors gaining access to a supposedly insignificant IoT device, like the HVAC control system for a global retail chain, only to pivot to other unsecured devices on the same network before reaching valuable sensitive information. While phishing remains the most popular attack vector, reinforcing the need for humans to be an integral part of a strong security program, IoT devices now offer another avenue for cybercriminals to access accounts and networks to steal data, conduct reconnaissance and further deploy malware.


Improving model performance through human participation

In order to achieve high-quality human reviews, it is important to set up a well-defined training process for the human agents who will be responsible for reviewing items manually. A well-thought-out training plan and a regular feedback loop for the human agents will help maintain the high-quality bar of the manually reviewed items over time. This rigorous training and feedback loop help minimize human error in addition to helping maintain SLA requirements for per item decisions. Another strategy that is slightly more expensive is to use a best-of-3 approach for each item that is manually reviewed, i.e., use 3 agents to review the same item and take the majority vote from the 3 agents to decide the final outcome. In addition, log the disagreements between the agents so that the teams can retrospect on these disagreements to refine their judging policies. Best practices applicable to microservices apply here as well. This includes appropriate monitoring of the following:

- End-to-end latency of an item from the time it was received in the system to the time a decision was made on it;
- Overall health of the agent pool;
- Volume of items sent for human review; and
- Hourly statistics on the classification of items.
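
The best-of-3 approach described above is straightforward to sketch: three agents review the same item, the majority vote decides the outcome, and any disagreement is logged for later retrospection. The item names and vote labels here are invented for illustration:

```python
# Minimal sketch of best-of-3 human review with disagreement logging.
from collections import Counter

disagreement_log = []

def best_of_three(item, votes):
    """votes: the 3 agents' decisions, e.g. ['approve', 'reject', 'approve']."""
    decision, count = Counter(votes).most_common(1)[0]
    if count < len(votes):                      # not unanimous
        disagreement_log.append((item, votes))  # retrospect on these later
    return decision

print(best_of_three("item-1", ["approve", "reject", "approve"]))  # majority: approve
print(len(disagreement_log))  # one disagreement logged for review
```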


The challenges of applied machine learning

One of the key challenges of applied machine learning is gathering and organizing the data needed to train models. This is in contrast to scientific research where training data is usually available and the goal is to create the right machine learning model. “When creating AI in the real world, the data used to train the model is far more important than the model itself,” Rochwerger and Pang write in Real World AI. “This is a reversal of the typical paradigm represented by academia, where data science PhDs spend most of their focus and effort on creating new models. But the data used to train models in academia are only meant to prove the functionality of the model, not solve real problems. Out in the real world, high-quality and accurate data that can be used to train a working model is incredibly tricky to collect.” In many applied machine learning applications, public datasets are not useful for training models. You need to either gather your own data or buy them from a third party. Both options have their own set of challenges. For instance, in the herbicide surveillance scenario mentioned earlier, the organization will need to capture a lot of images of crops and weeds.


Window Snyder Launches Startup to Fill IoT Security Gaps

In the connected device market, she sees a large attack surface and small security investment. "There are so many devices out there that don't have any of these mechanisms in place," she explains. "Even for those that do have security mechanisms, not all of them are built to the kind of resilience that's appropriate for the threats they're up against." It's a big problem with multiple reasons. Some organizations have small engineering teams and few resources to build resilience into their products. Some have large teams but don't prioritize security because they're in a closed-system manufacturing operation, for example, and the machines don't have network access. Many connected devices are in the field for long periods of time and it's hard to deliver updates, so manufacturers don't ship them unless they have to. "There's this combination of both security need and then additionally this requirement for an update mechanism that is reliable," Snyder continues. Oftentimes manufacturers lack confidence in how updates are deployed and don't trust the mechanism will deliver medium- or high-severity security updates on a regular basis.



Quote for the day:

"Authority without wisdom is like a heavy ax without an edge -- fitter to bruise than polish." -- Anne Bradstreet

Daily Tech Digest - April 23, 2021

Advanced anomaly detection: how to defeat ransomware

With perimeter defenses increasingly becoming a thing of the past, attack surfaces increasing, and adversaries becoming more capable, a managed threat detection and response (MDR) model has piqued interest in major industries. A crucial difference between MDR and traditional ransomware defenses is MDR’s proactive response to threats. MDR is a powerful managed security service that combines threat intelligence, threat hunting, security monitoring, incident analysis, and incident response. It leverages telemetry on endpoints, monitors user behaviors, and helps produce a data-driven baseline of a business’ ‘normal’ activities, whether on premises or in the cloud. Essentially, it couples the best detection technologies and security expertise to seek out and eliminate threats before catastrophic damage occurs. Ransomware protection has been critical for businesses, especially during the pandemic. COVID-19 has proven to be a nightmare for assessing what ‘normal’ behavior looks like for organizations. Most companies lacked contingencies for adapting to the pandemic.
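
The baseline-and-deviation idea at the heart of this can be sketched very simply. The telemetry values below are invented for illustration; real MDR tooling models many signals at once, not a single metric:

```python
# Sketch of anomaly detection: build a baseline of 'normal' activity
# from telemetry, then flag observations that deviate sharply from it.
import statistics

baseline = [102, 98, 101, 99, 100, 103, 97]   # e.g. files touched per hour by a user
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from normal."""
    return abs(observation - mean) / stdev > threshold

print(is_anomalous(101))    # within the normal baseline -> False
print(is_anomalous(5000))   # e.g. mass file encryption by ransomware -> True
```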


Low-code and no-code won't kill developer jobs, here's why

The fact is low-code and no-code has been a term for probably 15 years, if not more in one way or another. I think I remember trying to write my first website in a low-code front page application, but what did I do? The second I did that I had to jump into the code, the HTML code to actually make it work. But we are at a different time, I think in really a unique time where we have a broad base of the workforce, the majority of the workforce now is the millennial generation or lower. So we have a younger workforce that actually grew up with technology and they've used it day in and day out. We don't really think of it as, 'Oh, well, you had apps and phones,' but that familiarity with technology has given a technical or literacy that just comes with today's day and age. Now, if you accompany that with the fact that low-code platforms are much more powerful than they were before, you have a perfect union of people who just want to get stuff done and configure out technology if you give it to them, and technology that is powerful enough, yet simple enough to leverage to really innovate on. Now, there is something you mentioned Bill, that is really important, which is enterprises have to be bought into this.


Juniper: Managing the complexity of future networks

You’ll see more self-healing, self-configuring and provisioning. Day 2 operations will be seamless, and self-correcting work will be all done in software automatically. In many ways we have already achieved these capabilities with Mist and our Wi-Fi technology that has a self-correcting mechanism. In the data center, operations will be driven by automation to eliminate errors, and find and correct particular problems. Our focus on AI has been a real shot in the arm for the company and our customers. As we pull more and more telemetry from our routers and switches, automation and AI will drive a lot more functionality into our software. The data gathered by telemetry is king. You need that kind of data to gain insights into what’s going on, how devices are working and software. You find out how the network is operating with packet capturing and the state of the cloud network, and then look for deviations. In our case, [Juniper’s AI-powered virtual assistant] Marvis in 2019 learned of network problems and could solve 20% of them without intervention. Now that number is over 80% of problems solvable automatically, in part due to all of the intelligent telemetry it gathers.


What is Blockchain? We explain the technology of blockchains

The blockchain is a constantly growing list of information. That information is in blocks, and all these blocks are linked together. Each block is linked to the preceding and following blocks, and the information each block contains is protected by an algorithm using a cryptographic function called a hash. This makes the information inviolable. It is a secure, open and public database. To illustrate how the blockchain works, the metaphor of a ledger distributed among many people is often used. It is a great ledger in which digital events are recorded. The fundamental thing here is that this ledger is "distributed", that is, shared between many different parties (nodes). It can only be updated by the consensus of the majority of the system participants and, once entered, the information can never be deleted. The Bitcoin blockchain, for example, contains an accurate and verifiable record of all the transactions that have been made in its history. In other words, the authenticity of the blockchain is not verified by a third party, but by the consensus of the whole: it is the same network of users that participates in it.
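
The hash-linking described above can be sketched in a few lines: each block stores the hash of the previous block, so altering any recorded event breaks every link after it. This is a minimal illustration of the linking idea only, not of consensus, mining, or any real blockchain's format:

```python
# Sketch of a hash-linked chain of blocks.
import hashlib, json

def block_hash(data, prev_hash):
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash,
            "hash": block_hash(data, prev_hash)}

chain = [make_block("genesis", "0" * 64)]
for event in ["alice->bob: 5", "bob->carol: 2"]:   # invented example events
    chain.append(make_block(event, chain[-1]["hash"]))

def is_valid(chain):
    """Check every block's link to the hash of the block before it."""
    return all(b["prev_hash"] == p["hash"] for p, b in zip(chain, chain[1:]))

print(is_valid(chain))                     # True: all links intact
chain[1]["data"] = "alice->bob: 500"       # tamper with a recorded event...
chain[1]["hash"] = block_hash(chain[1]["data"], chain[1]["prev_hash"])
print(is_valid(chain))                     # False: the next block's link no longer matches
```

In a real blockchain the nodes reject the tampered copy because the honest majority's consensus version of the ledger does not match it.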


Europe lays out plan for risk-based AI rules to boost trust and uptake

The planned law is intended to apply to any company selling an AI product or service into the EU, not just to EU-based companies and individuals — so, as with the EU’s data protection regime, it will be extraterritorial in scope. The overarching goal for EU lawmakers is to foster public trust in how AI is implemented to help boost uptake of the technology. Senior Commission officials talk about wanting to develop an “excellence ecosystem” that’s aligned with European values. “Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said Commission EVP, Margrethe Vestager, announcing adoption of the proposal at a press conference. “On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI. This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.”


Conversation about crossgen2

Crossgen2 is an exciting new platform addition and part of the .NET 6 release. It is a new tool that enables both generating and optimizing code in a new way. The crossgen2 project is a significant effort, and is the focus of multiple engineers. I thought it might be interesting to try a more conversational approach to exploring new features. ... Crossgen’s pedigree comes from the early .NET Framework days. Its implementation is tightly coupled with the runtime (it essentially is just the runtime and JIT attached to a PE file emitter). We are building a new version of Crossgen – Crossgen 2 – which starts with a new code base architected to be a compiler that can perform analysis and optimizations not possible with the previous version. ... As the .NET Core project became more mature and we saw usage grow across multiple application scenarios, we realized that crossgen’s limitation of only really being able to produce native code of one flavor with one set of characteristics was going to be a big problem. For example, we might want to generate code with different characteristics for Windows desktop on one hand and Linux containers on the other. The need for that level of code generation diversity is what motivated the project.


Machine Learning with ML.NET – NLP with BERT

Language is sequential data. Basically, you can observe it as a stream of words, where the meaning of each word depends on the words that come before it and the words that come after it. That is why computers have such a hard time understanding language: in order to understand one word, you need its context. Also, sometimes you need to provide a sequence of data (words) as the output as well. A good example to demonstrate this is the translation of English into Serbian. As an input to the algorithm, we use a sequence of words, and for the output, we need to provide a sequence as well. ... During the training process, the Encoder is supplied with word embeddings from the English language. Computers don’t understand words; they understand numbers and matrices (sets of numbers). That is why we convert words into some vector space, meaning we assign a certain vector to each word in the language (mapping it to some latent vector space). These are word embeddings. There are many available word embeddings, like Word2Vec. However, the position of the word in the sentence is also important for the context.
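
The idea of embeddings plus positional information can be sketched in a few lines. The toy vectors, the four-dimensional size, and the sentence below are made up for illustration; real embeddings such as Word2Vec or BERT's are learned and have hundreds of dimensions, but the sinusoidal positional encoding follows the scheme popularized by the Transformer paper:

```python
import math

# Toy embedding table: each word is mapped to a small fixed vector.
# Real embeddings (Word2Vec, BERT) are learned and far larger.
EMBEDDINGS = {
    "the": [0.1, 0.3, 0.0, 0.2],
    "cat": [0.9, 0.1, 0.4, 0.7],
    "sat": [0.2, 0.8, 0.6, 0.1],
}

def positional_encoding(pos, dim):
    """Sinusoidal encoding: even indices use sine, odd indices use cosine,
    with wavelengths that grow geometrically across the dimensions."""
    return [
        math.sin(pos / 10000 ** (i / dim)) if i % 2 == 0
        else math.cos(pos / 10000 ** ((i - 1) / dim))
        for i in range(dim)
    ]

def embed(sentence):
    """Look up each word's vector and add its positional encoding,
    so the same word at different positions yields different inputs."""
    dim = 4
    return [
        [w + p for w, p in zip(EMBEDDINGS[word], positional_encoding(pos, dim))]
        for pos, word in enumerate(sentence.split())
    ]

vectors = embed("the cat sat")
print(len(vectors), len(vectors[0]))  # 3 words, 4 dimensions each
```

Because the positional term differs per position, the same word appearing twice in a sentence produces two different input vectors, which is exactly the positional context the paragraph says embeddings alone lack.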


How micro-segmentation creates an uphill battle for intruders

To determine just how effective micro-segmentation can be, Illumio conducted a red team exercise with Bishop Fox. The team was tasked with finding “crown jewel” assets in a test environment, and while they did not face a defensive blue team, they were pitted against increasingly tight micro-segmentation policies. The first and lowest level policy tested was environmental separation. This is a fairly coarse-grained approach where workloads in different environments, such as production, testing, or development, can only connect with others in the same environment. It quickly became clear that even this simple level of separation could cause attackers to take at least three times as long to reach their target. This 300-percent increase in difficulty for the intruder meant defensive tools and security personnel had much more time to detect and investigate signs of unusual activity. The next level of micro-segmentation, application ringfencing, proved to be even more effective, creating a 450-percent increase in difficulty for the attacker. At this stage, only workloads associated with specific applications could talk to each other.
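
The two policy levels described above amount to simple predicates over workload attributes. A minimal sketch, with hypothetical workload names and labels (real enforcement happens in the network layer, not in application code):

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    environment: str   # e.g. "production", "testing", "development"
    application: str   # used by the stricter ring-fencing policy

def env_separation_allows(src, dst):
    """Lowest policy level: workloads may only talk within their environment."""
    return src.environment == dst.environment

def ringfence_allows(src, dst):
    """Stricter level: same environment AND same application."""
    return env_separation_allows(src, dst) and src.application == dst.application

web = Workload("web-1", "production", "storefront")
db  = Workload("db-1", "production", "storefront")
ci  = Workload("ci-1", "testing", "storefront")
pay = Workload("pay-1", "production", "payments")

print(env_separation_allows(web, db))   # True: same environment
print(env_separation_allows(web, ci))   # False: prod cannot reach testing
print(ringfence_allows(web, pay))       # False: different applications
```

Ring-fencing is strictly tighter than environmental separation: every connection it allows would also pass the environment check, which is why each successive policy level cost the red team more time.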


Quantum: It’s still not clear what it’s good for

The entire quantum industry is "still finding its way to what applications are really useful," he said. "You tend to see this list of potential applications, a heralded era of quantum computing, but I don't think we really know," he said. The Qatalyst software from QCI focuses on the kinds of problems that are of perennial interest, generally in the category of optimization, particularly constrained optimization, where finding a solution to a given loss or objective function is complicated by having to restrict the search to variables that have some constraint enforced on them, such as bounded values. ... "They are described at a high level as the traveling salesman problem, where you have multi-variate sort of outcomes," said Liscouski. "But it's supply-chain logistics, it's inventory management, it's scheduling, it's things that businesses do today that quantum can really accelerate the outcomes in the very near future." Such problems are "a very important use case," said Moulds. Quantum computers are "potentially good at narrowing the field in problem spaces, searching through large potential combinations in a wide variety of optimization problems," he said.
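
Constrained optimization of this kind is easy to state in code even though it scales terribly, which is the whole point of looking to quantum accelerators. A brute-force sketch of a tiny routing problem; the distance matrix and the "visit site 3 last" constraint are invented for illustration:

```python
from itertools import permutations

# Symmetric distance matrix for four sites (a toy supply-chain routing problem).
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def route_cost(route):
    """Total distance of a closed tour starting and ending at site 0."""
    tour = (0,) + route + (0,)
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:]))

def best_route(constraint=lambda r: True):
    """Exhaustive search: feasible only for tiny instances, which is exactly
    why larger cases fall to heuristic or quantum-inspired solvers."""
    candidates = (r for r in permutations([1, 2, 3]) if constraint(r))
    return min(candidates, key=route_cost)

unconstrained = best_route()
# Constrained variant: site 3 must be visited last
# (e.g. the final delivery on the route).
constrained = best_route(lambda r: r[-1] == 3)
print(unconstrained, route_cost(unconstrained))  # (1, 3, 2) 23
print(constrained, route_cost(constrained))      # (1, 2, 3) 26
```

With four sites there are only six tours to check; at twenty sites there are already about 10^17, which is why exhaustive search gives way to heuristics. Note how the constraint shrinks the feasible set and forces a more expensive tour (26 vs. 23).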


Zuzana Šochová on Becoming an Agile Leader

Agile at the organizational level is changing the DNA of organizations; it brings higher autonomy of creative, innovative, and collaborative teams that are better designed to deal with complexity and the unpredictability of the VUCA challenges. It needs flexibility and quick responses to change. It breaks the fundamental beliefs that classical management was built on, and creates a strong need for changing leadership. Dynamic structures with no fixed design are hard to manage the traditional way, and growth of emergent leadership is inevitable. Agile leaders are catalyst and servant leaders; they are role models of a new way of working. They coach, mentor, and encourage others to become agile leaders as well. Being an agile leader is a journey, and agile leaders need to focus on helping other leaders around them grow to make agility as a whole sustainable. Having a critical mass of agile leadership is crucial for any agile environment; without it, we are only creating another process and adding terminology, and all we get is “fake agile,” not business results.



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - April 22, 2021

CISA Orders Agencies to Mitigate Pulse Secure VPN Risks

CISA is ordering agencies to use the Pulse Connect Secure Integrity Tool to check the integrity of file systems and take further action as necessary. Ivanti developed the tool, which helps organizations determine if malicious activity is taking place. "CISA has determined that this exploitation of Pulse Connect Secure products poses an unacceptable risk to federal civilian executive branch agencies and requires emergency action," according to the emergency directive. "This determination is based on the current exploitation of these vulnerabilities by threat actors in external network environments, the likelihood of the vulnerabilities being exploited, the prevalence of the affected software in the federal enterprise, the high potential for a compromise of agency information systems, and the potential impact of a successful compromise." The Biden administration has been responding to a series of security incidents, including the SolarWinds supply chain attack, which led to follow-on attacks on nine government agencies and 100 companies and exploits of flaws in on-premises Microsoft Exchange email servers.


Why DevSecOps Should Strive for Effective Enforcement Measures

Applications today – especially in modern development environments – extensively use APIs to share and consume sensitive data. These APIs are just as vulnerable and require dedicated, surgical technology to make sure there is no token abuse, excessive utilization, or data theft via injections. Beyond API security, many services rely on integrating or serving bots and need to make a clear distinction between good bots and bots with malicious intent. And for all its acceptance by AD&D, RASP is itself vulnerable to some attacks; denial of service is just one example. From a DevOps point of view, applying security enforcement is risky. It can affect the user experience or even break the flow, leading to runtime errors. The software development lifecycle (SDLC) has many security blind spots, especially in today’s hybrid, multi-cloud architecture. For this very reason, many technologies provide alerts, which is great, but there is some fatigue from tools that only provide visibility. Automated security testing and vulnerability scanners for web servers, operating systems, and even container images fall short on actual enforcement, forcing the developer to take a few steps back and patch. When such alerts arrive en masse, it is much harder to prioritize and address them all.


The strange bedfellows of AI and ethics

There is a tendency to assume that computers cannot be biased – but that is not the case. AI-based systems learn from the data that they are fed. If we feed them the “wrong” data, we can inadvertently build in biases that we may not even notice. For example, historically, there have been more men than women in technology jobs. It is a very short step from that data to a position where a hiring algorithm learns that men are more likely to do a technology job, and then “decides” that men must be better than women in those jobs. The good news is that we can manage this. We can, and should, be aware of our own biases. However, we should also build diverse teams to work with AI, as a way of ensuring that we surface more of the inadvertent biases – the ones that we don’t even notice because they have become norms. It is not going to be enough to respond to developments in AI. We need to be proactive in setting up ethical safeguards to protect us all. A recent webcast from SAS Canada on AI and ethics recommends that organisations should develop a code of conduct around AI and foster AI literacy. They should also establish a diverse ethics committee to manage and oversee development and implementation processes.
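
The hiring example can be made concrete in a few lines. The records below are fabricated purely to mirror the skew the passage describes: more men in the historical data, not better performance:

```python
# Hypothetical historical hiring records, skewed because more men applied
# for tech roles in the past - not because either group performs better.
history = (
    [("male", "hired")] * 70 + [("female", "hired")] * 10
    + [("male", "rejected")] * 15 + [("female", "rejected")] * 5
)

def naive_hire_rate(records, gender):
    """A naive model that learns P(hired | gender) straight from the data
    will encode the historical imbalance as if it were ground truth."""
    outcomes = [outcome for g, outcome in records if g == gender]
    return outcomes.count("hired") / len(outcomes)

print(round(naive_hire_rate(history, "male"), 2))    # 0.82
print(round(naive_hire_rate(history, "female"), 2))  # 0.67
```

A model that treats these conditional rates as ground truth will favor male candidates purely because of the historical composition of the data; surfacing exactly this kind of gap is one reason the article recommends diverse teams and an ethics committee.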


REvil Ransomware Gang Threatens Stolen Apple Blueprint Leak

The extortion threat was unveiled Tuesday, hours before Apple was scheduled to make a series of major new product announcements. REvil published a number of alleged blueprints for Apple devices, which it claimed to have stolen from Taiwanese manufacturer Quanta Computer, which builds computing devices for a number of vendors. "In order not to wait for the upcoming Apple presentations, today we, the REvil group, will provide data on the upcoming releases of the company so beloved by many," the REvil gang says in a post to its data leak site. "Tim Cook can say thank you Quanta," it adds, referring to Apple's CEO. REvil claims that its previous ransom demands have been rebuffed by Quanta. "From our side, a lot of time has been devoted to solving this problem. Quanta has made it clear to us that it does not care about the data of its customers and employees, thereby allowing the publication and sale of all data we have," REvil says. Quanta and Apple didn't immediately respond to a request for comment. REvil's data-leak site further lists Cisco, Dell, HP, Siemens, Sony and Toshiba as being among the other manufacturers with which Quanta works.


Five Habits Of Highly Successful COOs

The best COOs are effective at building trust with their CEO. This trust allows them to be brutally honest with the leader of their company and gives the endless ideas created by the CEO a filter. This is not No. 1 by accident. The foundation of any great CEO and COO relationship is trust, and all the successful COOs I’ve seen have a track record of building genuine trust with their CEOs and with prior teammates before climbing the ranks to second in command. This allows the CEO to confidently pass anything off his or her plate to the COO so they can focus on the tasks that are the highest and best use of the CEO’s time. One of the most common key responsibilities of the COO is to attract, hire and retain high performers. The COO is basically the hub of the organization and it’s critical they have their finger on the talent pulse. The best-in-class COOs are always hiring. They understand that hiring top talent is one of the most important functions of the company. In addition to hiring high performers, they also spend significant time developing their highest performers. It can be so easy to focus your time and attention on only the lowest performers, but the most effective COOs take the time to continue developing the top 20% in addition to the rest of the team.


Advice for Aspiring Data Scientists

Some ideas for what to include in your portfolio: analyses, code gists, webapps, data documentation and blogs (+ README files!). You don’t need all of these by any means but if I had to choose two, I’d choose a webapp and accompanying blog post. A webapp is a great way to show your ability to link together different pieces of software and create something dynamic, hosted on the web. But why a blog? As I argued in my last post, communication is one of, if not the most important aspects of your job as a data scientist. Written communication is especially vital, and even more so if your job is remote. A well-written blog post (with linked code) allows the reader to get a sense of how you communicate, code, and think. If they get good signal from this, they will want to talk with you. This matters because getting your resume looked at is the hardest step in the job search process, so if you can increase your chances of conversion here, you’ll be in a great place. You may now be wondering how to get inspiration for your portfolio. What about starting with a cool dataset you see referenced on Twitter or Kaggle? Are there any data quality issues like leakage, truncation, missing data? How do they impact an analysis?


Cloud archiving: A perfect use case, but beware costs and egress issues

There are still issues that may inhibit the move to the cloud. While there are many examples of companies that want the move to boost operating expenditure and cut down on capital expenditure, there are instances of organisations that want to maintain the latter for accountancy reasons. And, says Betts, there are organisations that have pulled everything back from the cloud because it’s easier to control costs. Some companies have been reluctant to move to the cloud for off-site archiving because of a perceived lack of cloud skills – this may apply particularly to small and medium-sized enterprises (SMEs). But, as Betts points out, there’s still a need for skills if they’re going to implement an on-premises policy, so it’s not such a straightforward swap. SMEs may well lack some of these specialist skills too, and may find this particularly true when working towards GDPR compliance. It is clear there are plenty of advantages to archiving in the cloud. By freeing CIOs from the pain of choosing a hardware medium for long-term storage, moving to the cloud offers greater flexibility.


A Reference Architecture for Fine-Grained Access Management on the Cloud

The key insight underpinning this architecture is the delegation of user authentication to a single service (the Access Controller) rather than placing that responsibility with each service to which the user may need access. This kind of federation is commonplace in the world of SaaS applications. Having a single service be responsible for authentication simplifies user provisioning and de-provisioning for application owners and accelerates application development. The Access Controller itself will typically integrate with an identity provider, such as Auth0 or Okta, for the actual authentication sequence, thus providing a useful abstraction across a wide array of providers and protocols. Ultimately, the identity provider guarantees non-repudiation of the user’s identity in the form of a signed SAML assertion, a JWT token, or an ephemeral certificate. This obviates the need to rely on a trusted subnet as a proxy for the user’s identity. It also allows configuring access policies down to the granularity of a single service, unlike VPNs, which permissively grant users access to all services on the network.
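
The non-repudiation step rests on verifying a signature over the identity claims. A stripped-down sketch of the idea using a JWT-like, HMAC-signed token; in practice the Access Controller would use a standard JWT or SAML library and the identity provider's published keys, and the secret, user, and TTL below are placeholders:

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-signing-key"  # stand-in for the identity provider's key

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def issue_token(user: str, ttl: int = 300) -> str:
    """Mint a JWT-like token: base64url(claims) + '.' + HMAC-SHA256 signature."""
    claims = b64url(json.dumps({"sub": user, "exp": time.time() + ttl}).encode())
    sig = b64url(hmac.new(SECRET, claims, hashlib.sha256).digest())
    return (claims + b"." + sig).decode()

def verify_token(token: str) -> dict:
    """Access Controller side: check the signature, then expiry; return claims."""
    claims_b64, sig_b64 = token.encode().split(b".")
    expected = b64url(hmac.new(SECRET, claims_b64, hashlib.sha256).digest())
    if not hmac.compare_digest(sig_b64, expected):
        raise ValueError("bad signature")
    padded = claims_b64 + b"=" * (-len(claims_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

token = issue_token("alice")
print(verify_token(token)["sub"])  # alice
```

Because any downstream service holding the verification key can check the signature itself, access decisions no longer depend on which subnet a request came from, which is the trusted-subnet point the passage makes.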


Why Big Data is Crucial for Agricultural Growth

Big data technologies have significantly increased the amount of information modern farmers possess for enhancing the efficiency of agricultural production. But what’s even more important than collecting and analyzing data is the ability to pull out the most important insights from it. The large number of variables affecting crop yield creates a wide range of possibilities for interpretation. This includes data on crop health, growth uniformity, stage of growth, etc. Having all of this data automatically collected and analyzed in one online tool enables farmers to make the most accurate predictions on crop yields. Such tools can use different algorithms to assess yield potential, taking into account weather conditions, historical yield data, and other necessary information. Based on yield forecasts, farmers can perform field activities at the right time to influence yield, and plan its storage and sale. Ultimately, yield prediction enables growers to decide which crop to plant, where, and when, based on the accurate analysis of historical and current data. Environmental threats and global climate change have a huge impact on the agricultural sphere.
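
The yield-forecasting step can be illustrated with the simplest possible model: an ordinary least-squares line fit to historical data. The rainfall and yield figures below are invented for illustration; real tools combine many more variables and far richer models:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, computed from the
    closed-form normal equations (no libraries needed)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical history: growing-season rainfall (mm) vs. yield (t/ha).
rainfall = [300, 350, 400, 450, 500]
yield_t  = [2.1, 2.6, 3.0, 3.5, 3.9]

a, b = fit_line(rainfall, yield_t)
forecast = a + b * 420  # predicted yield for a 420 mm season
print(round(forecast, 2))  # 3.2
```

The same closed-form fit works for any single predictor (temperature, vegetation index, fertilizer rate); production systems regress on many variables at once and validate forecasts against held-out seasons.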


Building Confidence with Data Resilience

The first step in any digital transformation journey starts with the data and the development of a foundational storage layer. Resilience starts with data, too. It is the fuel that drives the company and it permeates every aspect of the technical infrastructure, from storage to AI, across the hybrid cloud, from the core data center to the edge. Lose data and you can lose your ability to function and, often, lose money. A recent study by KPMG found that cyber security risk will pose the greatest threat to a company’s growth over the next three years. The KPMG 2021 CEO Outlook Pulse Survey surveyed 500 CEOs in 11 markets around the world. Organizations like Pitney Bowes, the University of California, San Francisco, and many others are living proof of the risks. But breaches tell only part of the story. According to a recent report by Harvard Business Review, the mean time it took businesses in 2019 to discover a cyberattack was 196 days. Cloud migrations are only compounding the challenge, increasing risk by 51%, according to the report. The point is, for most organizations, the problem is not only losing data and vital corporate information, but also not realizing it for six months.



Quote for the day:

"Always remember that you are absolutely unique. Just like everyone else." -- Margaret Mead

Daily Tech Digest - April 21, 2021

UK’s IoT ‘security by design’ law will cover smartphones too

The government introduced a security code of practice for IoT device manufacturers back in 2018 — but the forthcoming legislation is intended to build on that with a set of legally binding requirements. A draft law was aired by ministers in 2019 — with the government focused on IoT devices, such as webcams and baby monitors, which have often been associated with the most egregious device security practices. Its plan now is for virtually all smart devices to be covered by legally binding security requirements, with the government pointing to research from consumer group “Which?” that found that a third of people kept their last phone for four years, while some brands only offer security updates for just over two years. The forthcoming legislation will require smartphone and device makers like Apple and Samsung to inform customers of the duration of time for which a device will receive software updates at the point of sale. It will also ban manufacturers from using universal default passwords (such as “password” or “admin”), which are often preset in a device’s factory settings and easily guessable — making them meaningless in security terms.
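
The default-password requirement is straightforward to express as a check a manufacturer could run at provisioning time. A minimal sketch; the banned list and the eight-character minimum below are illustrative choices, not taken from the legislation:

```python
# A few universal default credentials of the kind the legislation targets;
# a real checker would use a much larger, regularly updated list.
BANNED_DEFAULTS = {"password", "admin", "12345", "default", "root"}

def password_compliant(password: str) -> bool:
    """Reject universal factory defaults and trivially short values."""
    candidate = password.strip().lower()
    return candidate not in BANNED_DEFAULTS and len(candidate) >= 8

print(password_compliant("admin"))        # False: universal default
print(password_compliant("Tr0ub4dor&3"))  # True
```

The point of the ban is that a check like this must hold for the value shipped in factory settings, so that two devices of the same model never leave the line with the same guessable credential.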


MLOps Vs Data Engineering: A Guide For The Perplexed

Data engineering involves designing and building pipelines to transform data into a format end-users (mainly data scientists) can understand. The pipelines collect data from different sources into a single warehouse. The data engineering job has been around for over a decade, ever since databases, SQL servers, and ETL burst onto the scene. But data engineering, as we know it, gained currency at the beginning of the last decade. Companies realised they were sitting on goldmines of data, and that software engineers, with the right tools, could leverage this data to drive business processes. Data engineering moved away from traditional ETL tools and developed new ones to handle swathes of data. Data engineering focuses on aspects such as data infrastructure, data warehousing, data mining, data crunching, metadata management, and data modelling. ... A survey by the International Data Corporation showed that most AI/ML projects don’t go into production, primarily because the expectations are not well communicated to the business or because of a lack of skill in maintaining the production models.
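
The pipeline pattern described here, extract from a source, transform into an end-user-friendly shape, load into a warehouse, can be sketched end to end in miniature. The CSV snippet and the in-memory SQLite "warehouse" are stand-ins for real sources and a real warehouse:

```python
import csv, io, sqlite3

# Minimal ETL sketch: extract CSV rows, transform them, load into a
# "warehouse" table (here an in-memory SQLite database).
RAW = "order_id,amount\n1,19.99\n2,5.00\n3,42.50\n"

def extract(source: str):
    """Read rows from the raw source into dictionaries."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows):
    """Normalise types and derive a field end-users can query directly."""
    return [
        (int(r["order_id"]), float(r["amount"]),
         "large" if float(r["amount"]) > 20 else "small")
        for r in rows
    ]

def load(rows):
    """Create the warehouse table and bulk-insert the transformed rows."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER, amount REAL, size TEXT)")
    db.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    return db

db = load(transform(extract(RAW)))
total, = db.execute("SELECT SUM(amount) FROM orders").fetchone()
print(round(total, 2))  # 67.49
```

Production pipelines add the concerns the excerpt lists on top of this skeleton: incremental loads, schema and metadata management, data quality checks, and orchestration across many sources.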


A fresh recipe for digital-driven work innovation

Some years ago, we examined technology and industry trends and quickly realized that the future would be automated, with RPA’s exciting promise of enabling more work to be done much faster and better - with less. We quickly realized that the key to working better involves strategically applying a combination of RPA and AI ingredients as a recipe to liberate the best from humans, technologies and data - and then unleash this combined potential across enterprise operations. We also realized that to address their actual goals or problems, at pace and on a greater scale, organizations needed to start with defining a human intelligence strategy. We’re talking about blending the best of human, AI, RPA, systems intelligence and data as a single capability to then be conceived, conceptualized and delivered as ‘enhanced work’ solutions. We call it the ‘Connected Intelligence’ model and this uniquely innovative approach releases major commercial value in months - rather than years. This model works in the following interdependent ways. RPA runs smart software robots to provide a rapid processing function, performing joined-up, data-driven work at unmatched speed, accuracy and integrity - 24x7.


Moving from Collocated Management to Remote Leadership

Remote-first is different from remote-friendly in every way. A remote-first culture fosters individual freedom and empowers collaboration through actual inclusion and equity in all parts of the work process, while a remote-friendly approach finds ways (usually the bare minimum such as video conferencing or a chat tool) to create the illusion of inclusion and collaboration, while remote workers often feel excluded, not-trusted or left out. Remote workers feeling left out in hybrid teams is a familiar concept for many, but this emotion can also be felt when the team is fully remote and operates with a collocated mindset, i.e. a remote-friendly approach. Some organizations have begun using the term “hybrid” to describe an environment in which people have a mandate to go to an office from time to time. However, the location from which people work is an entirely different topic than the operational implications of building a strong remote-first culture that is inclusive of all people regardless of their location, whether it be in an office or not.


Explained: Why soft skills are important to excel in your career

Coordination and leadership abilities play a vital role in a modern workplace. It is crucial to address the lack of soft skills in individuals via training programs adapted to suit different sectors and levels. It is imperative to develop interpersonal skills, so that workplaces remain relevant in the challenging market, especially in the insurance industry which relies on personal equity/face-to-face communication. Effective communication enables you to build relations with people and is the key to your success in this highly competitive sector. It also helps in time management. Customer service, persuasion, and negotiation skills are the strengths individuals require in the insurance sector. Dedication and consistency in work help in boosting the team's morale and efficiency. With technology leading the world today, it is crucial that you adapt your skills as per the requirements and needs of the hour. Communication is the key to success. Enthusiasm, honesty, and conviction must complement technology. Machines can only do so much; the rest is up to human skills.


Seeing the Bigger Picture With BizDevSecOps

IT staff is under mounting pressure to perform, and they may not have the correct elements in place to support elegant and profitable applications, said Ostrowski. When it comes to monitoring, in particular, alert fatigue could especially hamper innovation attempts and make it difficult to know what to prioritize. Instead, Ostrowski advocates for letting tangible business outcomes, such as conversion rates and performance, drive development. To correlate the impact on revenue more directly, he encourages “giving business leaders a seat at table during the entire CI/CD pipeline.” One aspect to roping in business, security, development and operations is unified observability across the entire stack. But wouldn’t it be challenging to give business leaders a seat within a technical software delivery process? Well, this wouldn’t be the first time business has directed technological change. Ostrowski recalls the mid–2000s and the dawn of the mobile market. During this period, IT became consumerized for laypeople, with glossy apps and impressively intuitive experiences. He recalls a general loss of faith in IT staff, as workers often had “better tech at home than they did in the office.” 


Securing vehicles from potential cybersecurity threats

Computers in cars are not a new development, but they are controlling more and more of what is happening with them and inside them. Vehicle control is now, for example, completely computerized, and attackers could take over the level 2 ADAS systems and consequently gain full control over a vehicle’s safety-related functionalities. “Today’s sophisticated connected vehicle architecture is inherently more vulnerable to cyber attacks. Connected vehicles can host up to 150 electronic control units and run on 100 million lines of code; tomorrow’s vehicles may contain up to 300 million lines of software code. Cyber attacks that exploit the increasing digitization of vehicles present a significant risk to manufacturers, vehicle owners, other drivers and pedestrians,” Meron noted. “Each OEM tries to come up with their own defense strategy, using the variety of tools available in the market to protect from different attack vectors. Eventually they all need to manage cybersecurity of the vehicle throughout its lifecycle, from the very first day the design process commences, through production and maintenance of the vehicle, until decommissioning.”


Cyber resilience: your last line of defence

By definition, cyber resilience measures an organisation’s strength in preparing for, operating through and recovering from an attack. Only a holistic security programme will assure the resilience of an organisation and that of its customers before, during and after adverse events. Quickly identifying, responding to and recovering from security incidents is key. To achieve this, cyber resilience must rest on people and processes, as well as a combination of technologies. When assessing their security posture, businesses should look for gaps in their security capabilities from a people, processes and technology perspective, and take steps to address these. For example, if staff lack security know-how, can this be fixed by hiring or developing dedicated security experts? And how can we use training to build enhanced security awareness throughout the organisation? Processes should be clearly defined to deliver the desired security outcomes and must be repeatable and measurable. For most organisations, pinpointing weaknesses and making improvements to their processes will be an iterative journey, which should be kept under constant review.


3 Factors That Should be Driving Your Digital Transformation

Digitalization projects are a prime opportunity to finally take up some long overdue “deferred maintenance,” those pending initiatives you’ve had sitting on the shelf for months or years -- not because they aren’t important, but because they weren’t absolute priorities at the time and were consequently put off. Folding in deferred maintenance projects as part of your bigger digital transformation effort is a great way to further maximize the value of that initiative. The more projects you can execute simultaneously, the bigger the benefits. Anyone can do a technical migration or upgrade, and in a vacuum, those smaller-scale endeavors won’t necessarily deliver the ROI you’d expect. But the more projects you can combine at once -- pairing a cloud migration with, for example, an OS/DB modernization project and the retirement of a legacy platform, all at the same time -- the greater the value you create, and the higher the ROI. Coalescing multiple projects into one also has other compounding benefits, from reduced testing and disruption to less downtime. All of that feeds into the project’s bottom line, making it more attractive to the business and valuable in the long run.


Nation-State Actor Linked to Pulse Secure Attacks

"Their primary goals are maintaining long-term access to networks, collecting credentials and stealing proprietary data," Charles Carmakal, senior vice president and CTO at FireEye Mandiant, says of the attackers. "We believe that multiple cyberespionage groups are using these exploits and tools, and there are some similarities between portions of this activity and a Chinese actor we call APT5." The attackers have been exploiting these vulnerabilities to compromise U.S. government agencies, critical infrastructure and private sector organizations, CISA says. FireEye adds the attacks are global, hitting a variety of government and private institutions. "The investigation shows ongoing attempts to exploit four issues: The substantial bulk of these issues involve three vulnerabilities that were patched in 2019 and 2020. Customers are strongly recommended to review the advisories and follow the guidance, including changing all passwords in the environment if impacted," Ivanti says.



Quote for the day:

"And the attitude of faith is the very opposite of clinging to belief, of holding on." -- Alan Watts

Daily Tech Digest - April 20, 2021

How to write a cyberthreat report executives can really use

Although there’s no single template for crafting a threat report, “it should look like whatever you think people will read," says deGrazia. "Senior managers get hit with lots and lots of paper, so whatever format it’s in, it has to get their attention.” CISOs also need to consider how often they want to generate these reports. Security leaders say the reports should come out on a regular schedule, whether they’re passed out weekly as Stebila did, monthly, or quarterly. The best schedule is one that matches the organization’s own cultural tempo, Rawlins says, adding that CISOs could also create and distribute customized reports to different recipients on different schedules based on the varying levels of threats and interest levels each party has. CISOs could, for example, share reports weekly with their CIOs but distribute them to the board only semi-annually. That regular schedule should not preclude sending out threat reports in response to urgent issues, security experts say. “You can’t ignore the fact that things come up, and come up quickly, and those things need to be communicated up the chain as quickly as possible,” deGrazia adds.


Consumer data protection is a high priority, but there’s still work to be done

“Most pertinently, it’s encouraging that consumer data protection is such a high priority for organizations, but there is clearly some work to be done in turning that priority into a reality in terms of what data is actually encrypted and at what points in the data lifecycle. It’s also apparent that organizations of all shapes and sizes are looking to adopt encryption for a range of new and cutting-edge use cases, which will no doubt continue to drive innovation in the industry.” “IT is tasked with deploying, tracking and managing encryption and security policy across on-premise, cloud, multi-cloud and hybrid environments, for an expanding array of use cases, and amidst widening threats. Encryption is essential for protecting company and customer data, but managing encryption and protecting the associated secret keys are rising pain points as organizations engage multiple cloud services for critical functions,” added Grimm. “Rising use of HSMs for encryption and key management shows that IT is starting to meet these challenges. Organizations will benefit from a growing ecosystem of integrated solutions for cloud security policy management ...”


The security impact of shadow IT

Shadow IT is also one of the reasons why strict compliance-based approaches to cyber security can only help you so far. If you are measuring patching of your internal systems as a security key performance indicator (KPI), for example, then you need to be conscious that if you have a 99% success rate at patching servers, an adversary will probably find that 1% of servers you have not patched. And if you have a 100% success rate at patching servers, you absolutely have to make sure that every server that exists is part of that measurement – if you have a server which is not enrolled in asset management and therefore not monitored in patch management processes, you could still be exposed and not be aware of it. We talk about the “advanced persistent threat” a lot in security, and it is easy to get hung up on the “advanced” part of that epithet. Although “advanced” is dangerous, what we should be most concerned about is “persistent”. You may have thousands of servers properly enrolled in your technical controls, fully-monitored and fully-patched – and one undocumented server which is not patched and not monitored.
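
The blind spot described above can be caught mechanically by diffing what a network discovery scan observes against what asset management knows about. A minimal sketch with hypothetical hostnames:

```python
def unmanaged_hosts(network_scan, asset_inventory):
    """The KPI blind spot: hosts seen on the network but absent from
    asset management are never enrolled in patch monitoring."""
    return sorted(set(network_scan) - set(asset_inventory))

# Hypothetical data: a discovery scan versus the asset database (CMDB).
scan = ["web-01", "web-02", "db-01", "legacy-ftp", "test-vm-7"]
cmdb = ["web-01", "web-02", "db-01"]

gaps = unmanaged_hosts(scan, cmdb)
print(gaps)  # ['legacy-ftp', 'test-vm-7']: "100% patched", yet still exposed
```

Run regularly, a diff like this turns "100% of enrolled servers patched" into the question the passage actually cares about: whether every server that exists is enrolled in the first place.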


Business Process Automation at Scale Is Key to Customer and Employee Experience

“The electronic signature is often where folks start,” asserted Casey of DocuSign. “I think that’s wonderful, obviously. But we have also started to step back and think about the systems of agreement that businesses have as a whole.” ... “Sure, automation will always cut costs—but we want to consider the experience. That’s what’s durable,” said Casey. During COVID, short-term fixes on the ground were prioritized over long-term solutions with high-level, lasting impacts. Now, the tide is beginning to shift. The benefits of full-scale automation—like better customer experiences, business agility, increased productivity, and greater security—are clearer than ever before. But what does strategic end-to-end automation look like in practice? ... Automating at scale is both technical science and change management art. For instance, close to 50% of businesses today claim that they are prepared to invest in an automated, end-to-end contract management solution, but simply don’t have the tools or know-how to do it effectively. “The problem is that end-to-end automation requires a lot of technology,” said Koplowitz.


The clash over the hybrid workplace experience

To optimize the employee experience of their hybrid workforce, employers should focus on "digital parity" as well as employee "experience parity," according to IDC. Digital parity refers to the requirement that all workers have secure access to the resources required to do their jobs, no matter their preferred device or location (office/remote/in the field). Experience parity means a democratized workplace, where all employees have the opportunity to collaborate, learn, develop, innovate and succeed, the report said. ... "Businesses everywhere must place a greater priority on enhancing employee experiences, which in turn will drive higher productivity, collaboration and better customer outcomes," said Leon Gilbert, senior vice president and general manager, Digital Workplace Services, Unisys, in a statement. "Organizations that adapt to provide digital and experience parity will not only retain employees in a competitive marketplace but will also empower those employees to provide the best service possible to their organization's customers. Do it well and you drive engagement, productivity and adaptability as new workforce demands emerge."


TCP/IP stack vulnerabilities threaten IoT devices

The actual danger to which an organization is exposed differs based on which of the vulnerable stacks it’s using. The FreeBSD vulnerability is likely more widespread – it affects millions of IT networks, including Netflix and Yahoo, as well as traditional networking devices like firewalls and routers, according to the report – but is likely easier to fix. “Those are manageable systems – we should be able to update them,” said Forrester senior analyst Brian Kime. “[And] they should be prioritized for remediation, because they’re part of your network stack.” The same cannot be said, in many cases, of the real-time operating systems affected by Name:Wreck, since the standard issues that make securing IoT devices difficult remain in play here. The ability to patch and update firmware is still not a standard feature, and the OEMs of connected devices – which may be quite old, and may not have been designed to be Internet-facing in the first place – might no longer be in business. In cases where those IoT devices are vulnerable, strong security has to start at the network layer, according to Hanselman. Monitoring the network directly for anomalous activity – which, again, can sometimes be difficult to detect in the case of a TCP/IP vulnerability – is a good start, but what’s really needed are techniques like DNS query protection.
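For illustration, the class of parsing flaw behind Name:Wreck involves DNS compression pointers (RFC 1035 §4.1.4): vulnerable stacks followed pointers without bounding hops or lengths, allowing loops and out-of-bounds reads. This hedged sketch, not taken from any affected stack, shows the guards a safe name parser needs:

```python
# Illustrative DNS name parser with the bounds checks that Name:Wreck-class
# bugs omitted: cap compression-pointer hops, cap total name length, and
# validate every offset before reading. Limits are defensive assumptions.

MAX_NAME = 255   # RFC 1035 limit on an encoded domain name
MAX_JUMPS = 16   # defensive cap on compression-pointer hops

def parse_name(msg: bytes, offset: int) -> str:
    labels, jumps, length = [], 0, 0
    while True:
        if offset >= len(msg):
            raise ValueError("truncated message")
        tag = msg[offset]
        if tag == 0:                        # root label terminates the name
            return ".".join(labels)
        if tag & 0xC0 == 0xC0:              # two-byte compression pointer
            if offset + 1 >= len(msg):
                raise ValueError("truncated pointer")
            if jumps >= MAX_JUMPS:
                raise ValueError("pointer loop")
            jumps += 1
            offset = ((tag & 0x3F) << 8) | msg[offset + 1]
            continue
        if tag & 0xC0:                      # 0x40/0x80 label types are reserved
            raise ValueError("bad label type")
        length += tag + 1
        if length > MAX_NAME:
            raise ValueError("name too long")
        end = offset + 1 + tag
        if end > len(msg):
            raise ValueError("truncated label")
        labels.append(msg[offset + 1:end].decode("ascii"))
        offset = end

print(parse_name(b"\x03www\x07example\x03com\x00", 0))  # -> www.example.com
```

A self-referential pointer such as `b"\xc0\x00"` would loop forever in a naive parser; here the `MAX_JUMPS` guard turns it into a clean error instead.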


The Four Fs of employee experience

To deliver an optimal employee experience (EX), we recommend focusing on four principles that we call the Four Fs. They are a set of heuristics inspired by the user-centric, iterative practice of design thinking, and they rest on the idea that your business goals, experiences, and technology are inseparable from one another and must be addressed in a unified, cross-company way. We refer to this approach as BXT (for business, experience, and technology). When applied to EX, the Four Fs unlock productivity and cut down on energy-sapping frustration stemming from internal systems and tools. They are the form, flow, feeling, and function of an employee’s work life. ... Employees can’t do their jobs well if they don’t understand what is being asked of them, the purpose of the work, or how they should prioritize their tasks. A firm we advised recently had received feedback from staff that the online training module for a new marketing curriculum it had developed was hard to follow and a bad experience overall. To address the problem, the company’s user experience team worked with PwC and a leading software firm to reimagine the employee learning interface.


Building a learning culture that drives business forward

We all think we have it. So we might say, “I’m a fast learner” or “I’m a slow learner” or “I learn in this way or that way.” But, actually, a lot of the underlying research—there are several strands of research—shows that people can actually build skills to learn new skills. We think of this as one of the most fundamental capabilities that a person can develop for themselves. It makes you better at getting better at things. It makes you better able to adapt to the changing environment that we all face these days. This idea of learning as a skill, in and of itself, is a fundamental one, and one that we talk to a lot of our clients about and, frankly, a lot of our colleagues as well. Because they’re also curious. They want to learn. But they need to be taught. Back in school, you might have thought about this as study skills. How do I organize myself in order to get my schoolwork done? But there’s a much more sophisticated version of that when you think about adult learners that I think we all need to invest in more. ... Learning to follow is listening before talking and learning how to be a contributor so that you can then lead. There are a few ways you can learn how to follow.


Concerns grow over digital threats faced from former employees

"A lot of companies fail to have clear policies or a checklist that employers use for post-employee separation. This is extremely important because failing to do so is going to involve a lot of things but the most important thing is that you want to make sure that the former employee or even a subcontractor that previously had access to the organization's technologies and systems is completely locked out," Guccione said in an interview. "It's going to avoid the risk of business disruption. It's going to avoid the risk of the leakage of intellectual property or trade secrets. It also mitigates legal risk because what you don't want is any exposure of or unauthorized access to sensitive data about the organization or its stakeholders. If a door is left open to a former member of the team and that person is disgruntled, you could have a real problem on your hands." ... In December, the Justice Department announced that a former Cisco worker was sentenced to two years in prison after he accessed the Cisco Systems cloud infrastructure that was hosted by Amazon Web Services and deleted 456 virtual machines for Cisco's Webex Teams application.


MLOps, An Insider’s Perspective: Interview With Nikhil Dhawan

Large tech firms have long used data science and its various techniques to learn about consumer behaviour. They have optimised their recommendation engines, bundled products together, improved targeting of the right customers, increased basket sizes and so on. They had the budget to dedicate resources to research and to partnerships with academic institutes focused heavily on statistical knowledge and theory. They also had a significant engineering function to build the infrastructure and tooling required to build on research outcomes. Smaller or business-focused firms don’t have this luxury. There is a big task list on any data science project, ranging from data acquisition, data ingestion and choosing initial algorithms, to testing multiple variants (including tuning the model and hyperparameters), preparing the datasets for each experiment, and validating and comparing the outputs. Finally, once we get the best possible trained model, the engineering task is to deploy the model to score or predict on live data to improve business functions.
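That task list can be compressed into a runnable skeleton: prepare data, try model variants across hyperparameters, validate, keep the best, and persist it for scoring on live data. Everything below is illustrative; the model is a deliberately tiny 1-D ridge regression so the whole loop fits in standard-library Python:

```python
# Hypothetical MLOps skeleton: ingest -> tune -> validate -> persist.
# The data, hyperparameter grid, and model are all toy placeholders.
import pickle
import random

random.seed(0)
# Data acquisition/ingestion stand-in: y = 3x + small noise.
data = [(x, 3 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(50)]]
train, valid = data[:40], data[40:]          # dataset preparation per experiment

def fit_ridge(points, lam):
    """Closed-form slope of 1-D ridge regression with strength lam."""
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return sxy / (sxx + lam)

def mse(w, points):
    return sum((y - w * x) ** 2 for x, y in points) / len(points)

# Hyperparameter tuning: grid over the regularization strength,
# selecting by validation error.
best = min(((lam, fit_ridge(train, lam)) for lam in [0.0, 0.1, 1.0, 10.0]),
           key=lambda lw: mse(lw[1], valid))

# "Deployment": persist the winning model so a scoring service can load it.
blob = pickle.dumps({"lambda": best[0], "slope": best[1]})
model = pickle.loads(blob)
print(round(model["slope"], 2))  # slope recovered near 3
```

A production version swaps each stage for real infrastructure (feature store, experiment tracker, model registry), but the control flow — search, validate, select, persist — stays the same shape.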



Quote for the day:

"A leader does not deserve the name unless he is willing occasionally to stand alone." -- Henry A. Kissinger