The problem with AI? People
The potential for computers to make bad decisions is compounded by the biased data that people feed into them, as Rishidot founder Krishnan Subramanian has highlighted: "[T]here is very little diversity among people building these AI algorithms." This can be mitigated through conscious efforts to hire diverse data engineers and scientists, but it remains a tricky conundrum, made all the trickier because people (whether they build the AI models or not) are themselves influenced by the data coming from the machines. In this way, we become ever more distant from raw data, and ever less capable of giving good data to our models, as Manjunath Bhat has written: "People consume facts in the form of data. However, data can be mutated, transformed and altered--all in the name of making it easy to consume. We have no option but to live within the confines of a highly contextualized view of the world." Catch the nuance? We rely on ever-increasing quantities of data to make decisions, but that data is just as increasingly mediated by machines that try to spoon-feed it to us in ways that make it easier to consume.
Liberating Structures - an Antidote to Zombie Scrum
Scrum is a simple, yet sufficient framework for complex product delivery. It helps organizations thrive on complexity. Scrum provides the minimal boundaries for teams to self-organize and solve complex problems with an empirical approach. However, we’ve noticed that although many organizations use Scrum, the majority struggle to grasp both the purpose of Scrum as well as its benefits. Instead of increasing their organizational agility and delivering value to customers sooner, they achieve the opposite. We’ve come to call this Zombie Scrum; something that looks like Scrum from a distance, but you quickly notice that things are amiss when you move closer. There is no beating heart of valuable and working software, customers are not involved, and there is no drive to improve nor room for self-organization. One antidote we’ve found helpful is to rethink how teams interact, both within the team as well as with stakeholders and the broader organization. For this, we found help in Liberating Structures.
Enterprise backup software provides data protection foundation
The first consideration is data movement, which is the process a backup application uses to get data from primary storage to the backup storage platform. Early backup and recovery software ran on each server and simply wrote to a local tape drive or disk device. This method wasn't scalable and introduced considerable hardware and management costs. Vendors have evolved their products into network-based backup. These systems implement one or more centralized backup servers that pull data across the network from each source application server. Scalability is achieved by adding more backup servers and storage media, such as tape and disk drives. As long as sufficient network bandwidth is available, backups can scale to meet demand. Network backup systems have advanced over time to improve the efficiency of data movement. Some products read data directly from the storage platform through snapshots and replication. Other systems use data protection APIs available in hypervisor and hyper-converged infrastructure platforms.
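The pull model described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: `source_files` stands in for data read from a source application server over the network, and `backup_store` stands in for the backup storage platform; both names are hypothetical.

```python
import hashlib

def pull_backup(source_files, backup_store):
    """Sketch of one pass of a pull-based network backup.

    source_files: dict mapping path -> bytes (data pulled from a source server).
    backup_store: dict acting as the backup storage platform.
    Returns the number of files copied on this pass.
    """
    copied = 0
    for path, data in source_files.items():
        digest = hashlib.sha256(data).hexdigest()
        # Skip files whose content is already stored -- a crude stand-in
        # for the efficiency measures (snapshots, incrementals) real
        # products use to move less data.
        if backup_store.get(path, {}).get("sha256") == digest:
            continue
        backup_store[path] = {"data": data, "sha256": digest}
        copied += 1
    return copied
```

Scaling out, as the excerpt notes, then amounts to running more such backup servers against more storage media, bandwidth permitting.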
Mental health issues concern UK tech professionals
The most significant reason behind the decline in mental wellbeing is an insufficient workforce. According to the study, tech teams are “stretched to breaking point” to make up for talent shortfalls, with several mentions of employees working more than 50 hours a week, which has a direct impact on stress levels. Employers that are very inflexible when it comes to working arrangements are three times more likely than highly flexible ones to have workers with mental health issues (31% versus 9%), according to the study. “No one would pretend that working in the tech sector is a walk in the park, but for it to be pushing more than half its workers into a state of mental health concern is a real issue for the sector,” said Albert Ellis, chief executive at Harvey Nash. “This is particularly true for those very small companies where a greater proportion of workers report that they are currently affected by stress,” he added. Companies are relatively supportive when it comes to mental health issues, with three-quarters (77%) having at least some kind of support in place.
Data science and ML for human well-being with Jina Suh
The mission of the HUE team is to really empower people by creating and inventing new technologies that promote emotional resilience and well-being. It’s really grounded in the fact that emotions are fundamental to human interactions and they influence everything that we do, starting from learning, memory, decision making and all these other aspects of our lives. So, you know, how do we bring emotional intelligence to technology is kind of the core of our research. ... As humans, we actually generate a lot of data about how we’re feeling or what we’re thinking: we have body language, we have the way that we speak, kind of the faces that we make. It’s really difficult to process all of that data all at once. So we need the help from computers and technology to not only capture all of that information, but also help us make sense of the data by analyzing the information. ... Computers have become ubiquitous in our lives, and we expect more meaningful interactions with our technologies and want our technologies to understand us in some sense.
The biggest risk to uptime? Your staff
The Uptime Institute has surveyed thousands of IT professionals throughout the year on outages and said that the vast majority of data center failures, from 70 percent to 75 percent, are caused by human error. And some of them are severe. It found that more than 30 percent of IT service and data center operators experienced downtime they called a “severe degradation of service” over the last year, with 10 percent of the 2019 respondents reporting that their most recent incident cost more than $1 million. ... "Perhaps there is simply a limit to what can be achieved in an industry that still relies heavily on people to perform many of the most basic and critical tasks and thus is subject to human error, which can never be completely eliminated," wrote Kevin Heslin, chief editor of the Uptime Institute Journal, in a blog post. "However, a quick survey of the issues suggests that management failure — not human error — is the main reason that outages persist. By under-investing in training, failing to enforce policies, allowing procedures to grow outdated, and underestimating the importance of qualified staff, management sets the stage for a cascade of circumstances that leads to downtime," Heslin went on to say.
Pattern of the Month: Single Piece Flow
As you'd expect, pull ultimately starts with consumer demand for a product or service; however, to enable smooth flow, at each station where work is done, the number of items that can be handled at any one time must be subject to a Work In Progress (WIP) limit. Anything below the WIP limit implies a potential for accommodating more work. It is this "pull signal" which draws work on from the previous station in the value chain. The theoretical WIP limit for achieving optimum pull is exactly one. This is known as single piece flow (SPF) and it has the clear advantage of reducing lead time, depreciation of stock-on-hand, and the cost of delay on each item to the absolute minimum. SPF requires cross-functional team members, all of whom can swarm on a single work item to progress it. In fact, in such cases, it can be argued that a WIP limit greater than one must mean a push system. SPF can be very difficult to achieve and yet the potential rewards are indisputable. With only one item on hand at any one time, there will be no opportunity for work to pool in the team's engineering process and very little chance for technical debt and waste to accumulate.
Can Fintech Make the World More Inclusive?
In recent years, fintech companies have played an important role in complementing the formal banking sector and providing trade credit to small firms. Chinese mobile and online payment platform Alipay, for example, has since 2006 provided credit to vendors operating on the Chinese e-commerce platform, Alibaba. Alipay, and subsequently Ant Financial, developed an algorithm-based internal rating system taking data from vendors’ real-time transactions on commercial platforms, such as the Chinese online shopping website Taobao, to provide credit facilities. Three key features show how Alipay/Ant Financial credit to SMEs helps to alleviate credit market frictions in China. First, using the data and information Alibaba has on its 16 million merchants, Ant Financial reduces information asymmetry between itself and potential borrowers, allowing it to extend credit to firms that traditional banks will not help due to information scarcity.
Building Intelligent Conversational Interfaces
In Machine Reading Comprehension, or Question Answering, you are given a piece of text (the context) and a query, and the goal is to identify the part of the text that answers the question. A combination of long short-term memory (LSTM) networks and attention models is used to find the answer in the context. At a high level, you feed both the context and the query through LSTM layers with word embeddings and character embeddings, compute pairwise query-to-context and context-to-query attention, and then apply bidirectional LSTM networks to predict the start and end of the answer span in the text. This is a very active area of research; in the last couple of years there has been a lot of progress in machine reading comprehension. Dialog understanding, or Dialog State Tracking, is another active research area. Many times, users don’t give all the information needed to achieve a task in a single turn. The bot has to converse with the user and guide the user toward completing the task (to track an order, for example). Maintaining the "state" of the dialogue and extracting information across different sets of messages is key to dialog understanding.
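The bidirectional attention step described above (in the style of models such as BiDAF) can be sketched numerically: build a similarity matrix between every context and query position, then attend in both directions. This is a simplified illustration with plain dot-product similarity, not a faithful reimplementation of any particular model.

```python
import numpy as np

def attention_flow(context, query):
    """Sketch of bidirectional attention between context and query encodings.

    context: (T, d) array of context token encodings (e.g. LSTM outputs).
    query:   (J, d) array of query token encodings.
    Returns context-to-query attended vectors (T, d) and a
    query-to-context summary vector (d,).
    """
    # Pairwise similarity between every context and query position.
    S = context @ query.T                          # (T, J)
    # Context-to-query: each context token attends over the query tokens.
    a = np.exp(S - S.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)              # softmax over J
    c2q = a @ query                                # (T, d)
    # Query-to-context: weight context tokens by how strongly they match
    # any query word, then summarize.
    m = S.max(axis=1)                              # (T,)
    b = np.exp(m - m.max())
    b /= b.sum()                                   # softmax over T
    q2c = b @ context                              # (d,)
    return c2q, q2c
```

In a full model, these attended vectors would be concatenated with the context encodings and passed through further bidirectional LSTM layers to predict the answer span's start and end.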
Integrating security with robotic process automation
Bot operators are employees responsible for launching RPA scripts and dealing with exceptions. Sometimes, in the rush to deploy RPA and see immediate results, enterprises do not distinguish between bot operators and bot identities: the bots are run using human operator credentials. This configuration makes it unclear when a bot conducted a scripted operation versus when a human operator took an action, and it becomes impossible to unambiguously attribute actions, mistakes and, most importantly, attacks or fraudulent activity. The other issue that arises from re-using human operator credentials with bots is that administrators will tend to keep passcode complexity and rotation frequency to a minimum, limited by what makes for a reasonable human user experience rather than by what a bot can handle. This eases brute-force attacks and the data leakages that follow. Instead, Gartner recommends assigning a unique identity to each RPA bot.
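The recommendation above can be illustrated with a short sketch: each bot gets its own identity and a credential far stronger than a human could manage, so every audited action is unambiguously attributable. All names here (`provision_bot_identity`, the dict-based credential store, the audit log) are hypothetical, not part of any RPA product's API.

```python
import secrets

def provision_bot_identity(bot_name, directory):
    """Sketch: give each RPA bot a unique identity instead of re-using
    a human operator's credentials. `directory` is a stand-in store."""
    if bot_name in directory:
        raise ValueError(f"identity already exists for {bot_name}")
    directory[bot_name] = {
        # Long random secret: unlike humans, bots can handle
        # high-complexity, frequently rotated credentials.
        "secret": secrets.token_urlsafe(48),
    }
    return bot_name

def audited_action(actor, action, audit_log):
    # Every entry names exactly one identity, so bot actions and
    # human operator actions can never be confused.
    audit_log.append((actor, action))
```

Rotating the secret then becomes a per-bot operation with no impact on human operators' logins.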
Quote for the day:
"Trust is the lubrication that makes it possible for organizations to work." -- Warren G. Bennis