Daily Tech Digest - March 09, 2021

We don’t need to go back to the office to be creative, we need AI

The solution to this dilemma will come from artificial intelligence (AI). The inherent trade-off between exploration and efficiency is well known to AI researchers. One question that those working in AI often have to grapple with is how often an algorithm should take actions that it hasn’t tried, as against actions it has already tried that will usually lead to some reward. Untried actions can yield spectacular results. For example, when the DeepMind computer program AlphaGo beat Go world champion Lee Sedol in 2016, it did so by exploring moves most human players had never seen before. Prior to move 37 in the second match against Sedol, AlphaGo had calculated that there was a one-in-ten-thousand chance that a human player would make that same move. And the adventurous gamble paid off. Human innovation involves a similar process of exploration and, to facilitate innovation, companies must get their employees to “collide”. Before the pandemic, this was achieved through open-plan architecture that encouraged “water-cooler” moments of unplanned encounters. But, with many employees working from home, corporations will have to find different ways to facilitate these kinds of random interactions.
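The exploration-versus-efficiency trade-off described above is often formalized as a multi-armed bandit problem, commonly handled with an epsilon-greedy policy: mostly exploit the best-known action, but occasionally try something untested. A minimal sketch (the reward probabilities and epsilon value below are illustrative assumptions, not from the article):

```python
import random

def epsilon_greedy(estimates, epsilon):
    """With probability epsilon, explore a random action;
    otherwise exploit the action with the highest estimated reward."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))  # explore an untried or rarely tried action
    return max(range(len(estimates)), key=estimates.__getitem__)  # exploit

def update(estimates, counts, action, reward):
    """Incremental sample-average update after observing a reward."""
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

estimates = [0.0, 0.0, 0.0]  # estimated reward per action
counts = [0, 0, 0]
random.seed(0)
for _ in range(1000):
    a = epsilon_greedy(estimates, epsilon=0.1)
    # hypothetical true reward probabilities: action 2 is best but starts untried
    reward = 1.0 if random.random() < [0.2, 0.5, 0.8][a] else 0.0
    update(estimates, counts, a, reward)
print(max(range(3), key=estimates.__getitem__))  # the learner usually settles on action 2
```

The occasional random action plays the same role as AlphaGo's move 37: a low-probability choice that can reveal a much better strategy than pure exploitation ever would.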


Why Data Monitoring is Critical in a Hybrid Cloud Environment

Effective capacity management is not the only benefit of data monitoring, however. While cloud providers typically have strong security protocols, agencies and organizations must remain vigilant for cyberattacks. Data monitoring software provides an effective means to spot problems and mitigate issues before they can affect or damage the network and limit operations. When it comes to data, agencies and organizations must be able to “properly protect their data whether it is on-prem, in the cloud, or in transit,” Grunewald explained. While the path to the cloud has become more clear, Grunewald cautions that there must be an ordered approach to migrating software. That process can be roughly divided into six steps: 1) assessment, 2) prioritization, 3) roadmap, 4) optimization, 5) build, and 6) migration. From start to finish, the entire process is designed to encourage frank conversations about what material and processes are worth the transition and how to best utilize the available resources. There are incredible gains to be made by transitioning to the cloud.


Attack on Exchange Servers Gives Impetus to Move Email to the Cloud

Moving Exchange to the cloud began in 2005 but only became mainstream after the release of Office 365 in 2011. I spoke about the perils of moving to Exchange Online at the Exchange Conferences of 2012 and 2014. On-premises servers were still attractive in 2014 but the situation is very different now, both in terms of the threat to on-premises servers and our knowledge of what it’s like for companies to run email in the cloud. According to data shared at the TEC 2020 conference, Exchange Online supports 5.5 billion mailboxes. That number seems enormous in the context of 250-odd million monthly active Office 365 users, but more reasonable when you consider that the figure includes Outlook.com users (400 million switched over to use the Exchange Online infrastructure in 2017), shared mailboxes, group mailboxes, resource mailboxes, and a very large number of system mailboxes used by the Microsoft 365 substrate. Exchange Online is a massive online service running on 275,000 mailbox servers. The attack penetrated none of these servers.


Infrastructure as code: Create and configure infrastructure elements in seconds

The final part of the IaC topology is the one that will be most visible and familiar to IT Ops teams. These are the container orchestrators that control the way in which containers are deployed and provisioned. The most widespread of these orchestrators is undoubtedly Kubernetes. However, the ubiquity of Kubernetes has created something of a problem—that most IT Ops staff, and indeed many developers, think that IaC and Kubernetes are synonymous. I don’t mean this as a criticism of Kubernetes. The system is perhaps the purest expression of the IaC paradigm: eminently portable, but also capable of being adapted to run efficiently on a wide variety of hardware. However, Kubernetes is far from representing the full range of what can be achieved by a careful mix of containers, VMs, and a creative use of container orchestrators. In other words—Kubernetes is the start of your IaC journey, not the end. While it’s a great place to begin to explore adaptive provisioning and continuous integration, many firms will need to develop bespoke containerization strategies in order to work with obscure systems: those that interact with legacy hardware.


Are Passwords Becoming A Thing Of The Past?

Compounding the problem of a weak password is the tendency of many users to use the same password across multiple sites and accounts, ranging from their social media to their financial accounts. One study revealed that more than half of respondents reuse an average of just five passwords across multiple services. One password getting compromised could spell disaster for a user, and even more so if they possess sensitive information about their company. Many users may already have their passwords out in the open – the website Have I Been Pwned has a database of over 613 million passwords that data breaches have exposed. Passwords acquired through these breaches are usually sold en masse to other hackers who use automated attacks like credential stuffing to find a password match to an account. One of the more common ways to strengthen authentication is to use a password manager, especially those that generate unique passwords and automatically change passwords every few months. Changing passwords frequently greatly reduces the risk of user data being affected in the event of a password breach. Two-factor authentication is another common method of going semi, if not fully, passwordless.
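The Have I Been Pwned database mentioned above can be queried without ever transmitting the password itself: its range API uses k-anonymity, so the client sends only the first five characters of the password's SHA-1 hash and matches the returned suffixes locally. A sketch of the client-side hashing step (the actual HTTP call is omitted here):

```python
import hashlib

def hibp_range_query_parts(password: str):
    """Split a password's SHA-1 digest into the 5-character prefix that is
    sent to the API and the 35-character suffix matched locally against
    the server's response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password123")
# A real client would GET https://api.pwnedpasswords.com/range/<prefix>
# and search the returned suffix list for <suffix> to learn the breach count.
print(prefix, suffix)
```

Because the server only ever sees a 5-character prefix shared by hundreds of different passwords, it cannot learn which password was checked.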


No More Wasted Data: Why More Companies Are Turning Data Into Action

Though the majority of data collected by businesses currently goes to waste, there are more tools emerging to help companies unify consumed data, automate insights, and apply machine learning to better leverage data to meet business goals. First, it's important to take a step back to evaluate the purpose and end goals here. Collecting data for the sake of having it won't get anyone very far. Companies need to identify the issues or opportunities associated with the data collection. In other words, they need to know what they're going to do with every single piece of data collected. To determine the end goals, start by analyzing and assessing different types of data collected to determine if it was beneficial to the desired outcome or has the potential to be but wasn't leveraged. This will help identify any holes where other data should be tracked. This will also help hone the focus on the more important data sets to integrate and normalize, ultimately making data analysis a more painless process that produces more usable information. Next, make sure the data is useful -- that it's standardized, integrated across as few tech platforms as possible, and that the collection of specific data follows company rules and industry regulations.
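The standardization and integration step usually begins with mapping each source platform's fields onto one common schema and canonicalizing values. A minimal sketch in Python (the source names and field names are hypothetical):

```python
# Map differently named fields from two hypothetical source systems onto one schema.
FIELD_MAPS = {
    "crm":       {"customer_email": "email", "signup_dt": "signed_up"},
    "analytics": {"user_mail": "email", "first_seen": "signed_up"},
}

def normalize(record: dict, source: str) -> dict:
    """Rename source-specific fields and standardize values."""
    mapping = FIELD_MAPS[source]
    out = {mapping.get(k, k): v for k, v in record.items()}
    out["email"] = out["email"].strip().lower()  # one canonical email format
    out["source"] = source                       # keep lineage for governance checks
    return out

unified = [
    normalize({"customer_email": " Ana@Example.COM ", "signup_dt": "2021-03-01"}, "crm"),
    normalize({"user_mail": "ana@example.com", "first_seen": "2021-02-27"}, "analytics"),
]
print(unified[0]["email"])  # ana@example.com
```

Once every record shares the same keys and value conventions, downstream analysis no longer has to special-case each platform, which is exactly the "more painless process" the excerpt describes.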


Akash Network Launches Akash MAINNET 2, the First Decentralized Open-Source Cloud

As the first open-source cloud and the only viable decentralized cloud alternative to centralized cloud providers like Amazon Web Services, Google Cloud, and Microsoft Azure, Akash MAINNET 2 empowers developers to break free from the limitations of traditional cloud infrastructure. The platform accelerates growth and scale in the blockchain ecosystem by enabling developers and companies to decentralize their cloud infrastructure, deploying applications faster, more efficiently, and at lower cost. Through the platform, individuals, companies, and data centers with underutilized computing capacity will also be able to monetize and lease their cloud compute to those who need it, recouping the high costs of server maintenance and capital expenditure. Recently, Akash announced an integration with Equinix Metal, the world's largest data center and colocation infrastructure provider with 220 data centers in 25 countries, to expand access to global, low-latency, and powerful cloud infrastructure. For the first time, developers will be able to launch applications such as DeFi apps, blogs, games, data visualizations, block explorers, blockchain nodes, and other blockchain network components on a decentralized cloud.


Number of ransomware attacks grew by more than 150%

On a technical level, public-facing RDP servers were the most common target for many ransomware gangs last year. Against the backdrop of the pandemic that caused many people to work from home, the number of such servers grew exponentially. In 52% of all attacks analyzed by Group-IB, publicly accessible RDP servers were used to gain initial access, followed by phishing (29%) and exploitation of public-facing applications (17%). Big Game Hunting – targeted ransomware attacks against wealthy enterprises – continued to be one of the defining trends in 2020. In hopes of securing the biggest ransom possible, the adversaries went after large companies. Big businesses cannot afford downtime, which averaged 18 days in 2020. The operators were less concerned about the industry and more focused on scale. It’s no surprise that most of the ransomware attacks that Group-IB analyzed occurred in North America and Europe, where most of the Fortune 500 firms are located, followed by Latin America and the Asia-Pacific respectively. The prospect of easy money prompted many gangs to join the Big Game Hunting. State-sponsored threat actors were also soon seen carrying out financially motivated attacks.


NHS datacentre transformation projects continue apace during pandemic

Ratcliffe suggests a shift away from a top-down approach across NHS IT, recognising the beneficial modularity that enables individual working parts such as hospitals or trusts to become more nimble – a whole that is greater than the sum of its parts. He says NHS Digital, the NHS’s digital arm, has partly succeeded in moving away from a more traditional “command and control” approach, shifting emphasis more to the empowerment of clinicians and the individual healthcare bodies for which they work. “It is amazing what a crisis will do,” says Ratcliffe, pointing to the fast-moving, NHS-driven, coronavirus vaccination roll-out across England. “There weren’t really rules before, but habits that became rules by repetition. I genuinely don’t think we will all suddenly go, ‘oh wow, we’ve got to get back to normal’ either – ‘normal’ doesn’t exist any more.” The NHS has been able to move forward – for example, in its Covid-19 vaccine roll-out and, to a lesser degree, in contact tracing – in ways that NHS Digital chief Sarah Wilkinson admitted, in an IBM presentation, “no one would have thought possible”. Ratcliffe says more productive provision might involve slicing resources by disease or by discipline, horizontally or vertically, instead of NHS-wide or even trust-wide.


Look to Banking as a Model for Stopping Crime-as-a-Service

We have seen good examples of how cybersecurity teams are working more closely with other internal parties, especially in the banking sector. Some of the major UK and European banks have been operating with an organizational structure where financial crime and cybersecurity teams have been part of the same business unit for over 10 years, driven by the natural synergy between these functions. This has driven significant progress. With the convergence of cyber and financial crime teams, the industry has seen the emergence of the fusion center, which can be thought of as an advanced version of the security operations center (SOC) management model, unifying several different teams within an organization, such as fraud, financial crime, and cyber. By bringing together these units, organizations can increase situational awareness, share analytics and threat intelligence more easily, become more attractive to talent, and follow a standard framework for procedures. Combating cybercrime and disrupting the illegal economy can then be done to a more effective degree by having more transparent management, establishing an end-to-end operating model, and allowing easier collaboration and consolidation on relevant threats and actions.



Quote for the day:

"Every leader needs to look back once in awhile to make sure he has followers." -- Kouzes and Posner

Daily Tech Digest - March 08, 2021

Women in tech: What's the landscape look like in the UK?

Computer science-related degrees have seen a relatively low percentage of female graduates, with the percentage fluctuating between 15% and 16% from 2015 to 2019. Those low participation levels in technical courses are reflected in the small percentage of women holding tech roles in the UK. Bev White, chief executive of the Harvey Nash Group, believes that in the wake of the COVID-19 outbreak and Brexit, the UK tech landscape will significantly change. “Digital will be at the heart of the revolution,” she said. “Tech will be more important than ever and alongside this, societal appreciation of the importance of inclusion is growing all the time. I’m confident that we will see real progress in terms of women’s participation in the tech sector. We are on an irresistible path to a different kind of society and tech must reflect this.” While the coronavirus pandemic has put technology and the digital workplace front and centre in many jobs, a recent whitepaper published by STEM Women found that 60% of female STEM students have had their future career prospects affected by the pandemic.


Cybersecurity in 2021: Stopping the madness

Cybersecurity wins the award for Most Dismal Science. But if suffering attacks now amounts to a cost of doing business, then the time-honored approach of prioritizing risk and limiting damage when breaches occur still offers reason for hope. This collection of articles from CSO, Computerworld, CIO, InfoWorld, and Network World delivers specific guidance on best security practices across the enterprise, from the C-suite to developer laptops. Writing for CSO, contributor Stacey Collette addresses the age-old question of how to focus upper management's attention on security in “4 ways to keep the cybersecurity conversation going after the crisis has passed.” The thesis is that five-alarm debacles like the SolarWinds attack can serve as useful wakeup calls. Collette suggests seizing the moment to convince the board to match the company business model with an appropriate risk mitigation framework – and to use information sharing and analysis centers to exchange information on industry-specific threats and defensive measures. CIO’s contribution, “Mitigating the hidden risks of digital transformation” by Bob Violino, surfaces a problem hiding in plain sight: Digital innovation almost always increases risk.


Public Sector Technology And Digital Transformation Predictions 2021

The public sector is a unique arena, worthy of separate consideration. It is, in fact, several unique arenas, with their own agendas around using technology to ensure mission success. “Trying to cover the entire public sector perspective is difficult,” notes Juliana Vida, Splunk’s chief technical advisor for the public sector and a former deputy CIO for the U.S. Navy. “As across any industry, there’s such a huge range of maturity levels, of leadership buy-in and prioritization around technology.” But we do have one blanket prediction for public agencies: 2021 will be a new era for data in government, says Frank Dimina, Splunk’s vice president for the public sector. Between the disorder and rapid transformation of the pandemic and a fractious U.S. election year, Dimina predicts continued and heightened interest in transformative digital technologies and the power of data to improve government services and citizen experience. “Government is moving with an agility we’ve never seen in peacetime,” Dimina says. “Just one example: The U.S. Department of Defense moved a million people to remote work in a month.”


Get to know 4 microservices versioning techniques

Versioning a microservices-based application is not as straightforward as with a traditional app. Microservices enable developers to design, build, test, deploy and update services independent of one another -- or simultaneously, if they choose to do so. While this is great for architectural flexibility, it also means that versioning one service can cause it to lose compatibility with another if updates are not properly planned or synchronized. As such, architects must implement the right methods and tools to version services consistently and allow quick rollbacks when needed. ... In contrast to URI versioning, the main benefit of this technique is that the names and locations of application resources remain the same across updates. This helps make sure the URI doesn't become cluttered with versioning information and that API names retain their semantic meaning to the developers who regularly work with them. One downside of this approach is that information cannot be readily encoded into the hypermedia links. ... Semantic versioning is an ideal practice for projects containing public-facing or communally shared APIs that experience frequent updates. This technique uses three non-negative integer values to identify version types: major, minor and patch.
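Semantic versioning's major.minor.patch convention can be enforced programmatically: a major bump signals a breaking change, so a client only needs to compare the first integer to judge compatibility. A minimal sketch (the helper names are my own, not from the article):

```python
import re

SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse(version: str) -> tuple:
    """Parse 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    m = SEMVER.match(version)
    if not m:
        raise ValueError(f"not a semantic version: {version!r}")
    return tuple(int(p) for p in m.groups())

def compatible(client: str, service: str) -> bool:
    """Same major version means no breaking changes between client and service."""
    return parse(client)[0] == parse(service)[0]

assert parse("2.10.3") > parse("2.9.7")    # numeric ordering, not lexicographic
assert compatible("1.4.0", "1.9.2")        # minor/patch bumps are additive
assert not compatible("1.4.0", "2.0.0")    # major bump signals a breaking change
```

Comparing tuples of ints avoids the classic string-comparison bug where "2.10.3" sorts before "2.9.7", which matters when services pick the newest compatible dependency automatically.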


Why data and machine intelligence will become the new normal in insurance

In the next 3-5 years, the digital insurance consumer will likely remain the millennials, with higher levels of income and education. It is important, though, not to assume homogeneity and develop solutions based on lazily assessed group characteristics. Personalisation is more important now than it ever has been. Beyond functionality and ease of access, emotions and personal growth are key drivers in consumption behaviour and, like in any other group, there are a diverse set of expectations and desires amongst this group. Tailoring services and online buying journeys to the individual rather than the group is paramount; in the same way that offering life insurance immediately following a bereavement could be viewed as inappropriate, so too could an offer of social insurance be offensive to a staunch individualist. Certain benefits, although appealing on the surface to members of 'the group', may not work at a more nuanced level – a donation with every policy bought to an environmental charity will not appeal to every millennial. Hopefully, in this personalised version of the future, the benefits will be sufficiently tailored that people will see what is most relevant to them, not to whichever 'group' they may belong to.


Why adding neurodiversity to the cybersecurity world is a win-win for companies and employees

"Often when management and executives hear 'disability accommodations,' they're thinking of physical or medical things to do," she said. "They're not thinking about the fact that autistic people just think differently, and most of us neurodivergent professionals function fine." Roddie founded the Actually Autistic Task Force when she joined IBM. This group provides a closed communication channel for anyone who identifies as autistic and meets weekly through WebEx. She said companies should use these employee resource groups to guide initiatives and collect feedback, as the Neurodiversity at IBM team does. "A lot of organizations mess up where they create a neurodiversity program in the workplace, and they don't actually talk to any neurodiverse people and mostly go based on stereotypes or assumptions," she said. "Having conversations with and including neurodivergent people in the development phase of social initiatives is important." Roddie said companies should set more intentional outreach goals to bring more neurodiverse individuals into the workforce.


India's Bitcoin Exchanges Breathe Sigh of Relief As Gov Reconsiders Crypto Ban

Sidharth Sogani, CEO of the Indian crypto intelligence firm CREBACO, told Decrypt that it’s the industry’s persistent dialogue with the government that has prevented a blanket ban. Sogani, who also heads the industry pressure group, the Association for Blockchain, Crypto, and Digital Asset Entrepreneurs (ABCE), knew there wasn’t going to be an all-out restriction as the government asked “very inquisitive questions”, signaling that careful consideration is being paid with a regulatory framework in mind. As mainstream institutions flock to the crypto space, the Indian government has seen crypto as an unmissable global trend, according to Sohail Merchant, CEO of the Indian crypto exchange PocketBits. PocketBits is one of the members of the Blockchain and Cryptocurrency Committee (BACC), part of the wider tech industry association Internet and Mobile Association of India (IAMAI). BACC has been campaigning for a regulatory framework instead of an outright ban. “The Indian government realizes that crypto is being adopted globally by corporations like PayPal, Tesla and top-tier banks,” Merchant told Decrypt. 


Developers vs. Security: Who is Responsible for Application Security?

Kunchala says secure code is more critical today than ever before due to remote working. Kunchala, who formerly led Deloitte’s application security practice, explained that knowing the risks of a pure remote workforce is critical for a good cyber hygiene interface. “Your attack surface is much larger now. It’s not within the four walls of your enterprise,” he says. “Application security should be a top priority. The biggest attack vector is the application layer.” This focus on application security is nothing new, however. Because most attacks occur at the application layer, Kunchala says, the focus on building secure code has been happening for the last decade. And with that focus, of course, the push and pull between developers and security experts has increased. In dealing with multiple clients across a diverse set of industries, Kunchala suggested that those groups in which defense is built into the culture have less friction for application security. Of course, it’s highly dependent on the industry. For some, like health care and finance, security may be more at the forefront. For others, it may not be as critical. “It’s a mixed bag, and a lot of that is defined by how the company perceives security, and how important security is for their products and solutions,” he says.


'The future is coming': At Dubai airport, travellers' eyes become their passports

Dubai’s airport, the world’s busiest for international travel, can already feel surreal, with its cavernous duty-free stores, artificial palm trees, gleaming terminals, water cascades and near-Arctic levels of air conditioning. Now, the key east-west transit hub is rolling out another addition from the realm of science fiction – an iris-scanner that verifies one’s identity and eliminates the need for any human interaction when entering or leaving the country. It’s the latest artificial intelligence programme the United Arab Emirates has launched amid the surging coronavirus pandemic, contact-less technology the government promotes as helping to stem the spread of the virus. But the efforts also have renewed questions about mass surveillance in the federation of seven sheikdoms, which experts believe has among the highest per-capita concentrations of surveillance cameras in the world. Dubai's airport started offering the program to all passengers last month. On Sunday, travellers stepped up to an iris scanner after checking in, gave it a good look and breezed through passport control within seconds. Gone were the days of paper tickets or unwieldy phone apps.


Regtech disruption trends: how digitisation is enhancing regulatory processes

One area of digital innovation that’s been prominent in anti-money laundering (AML) and Know Your Customer (KYC) is biometric security. Eradicating the need to remember log-in credentials, this allows users to verify their identity using fingerprints, selfies or other biological characteristics. Biometric security also includes detection of suspicious behaviour that deviates from how a user normally uses a platform, such as a different IP address. Banks, for example, have been starting to provide the option of fingerprint or facial recognition-powered access to accounts. But today’s solutions should look out for false face IDs, which have also risen and aim to fool the system. “Our research indicates that fake face IDs are available on the dark web for as little as €50 and some of them are so realistic, they can often fool human passport agents,” said Roger Tyrzyk, director, global gaming and sales for UK&I at IDnow. “The most commonly faked documents are national ID cards followed by passports. Other documents include residence permits and driving licences. The quality of these fake IDs is increasing too.”
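The behavioural check described above, flagging activity that deviates from how a user normally accesses a platform, can in its simplest form compare each login's attributes against that user's history. A toy sketch (a production system would score many more signals, such as device fingerprint and geolocation, and weigh them statistically):

```python
from collections import defaultdict

class LoginMonitor:
    """Flag logins from IP addresses a user has never been seen on before."""

    def __init__(self):
        self.known_ips = defaultdict(set)

    def check(self, user: str, ip: str) -> bool:
        """Return True if the login looks suspicious (first time on this IP,
        after a baseline exists); always record the IP for future checks."""
        suspicious = ip not in self.known_ips[user] and bool(self.known_ips[user])
        self.known_ips[user].add(ip)
        return suspicious

monitor = LoginMonitor()
monitor.check("ana", "203.0.113.7")          # first ever login: builds the baseline
print(monitor.check("ana", "203.0.113.7"))   # familiar IP, not flagged
print(monitor.check("ana", "198.51.100.9"))  # new IP, flagged for step-up auth
```

A flagged login would typically not block the user outright but trigger an extra verification step, such as the biometric checks described in the excerpt.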



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - March 07, 2021

AI projects yield little business value so far

David Semach, a partner and head of AI and automation at Infosys Consulting for Europe, the Middle East and Africa (EMEA), agrees with the researchers that satisfaction with the technology in a financial sense is often quite low, partly because organisations “are mostly still experimenting” with it. This means it tends to be deployed in pockets rather than widely across the business. “The investment required in AI is significant, but if it’s just done in silos, you don’t gain economies of scale, you can’t take advantage of synergies and you don’t realise the cost benefits, which means it becomes a cost-prohibitive business model in many instances,” says Semach. Another key issue here is the fact that most companies “mistakenly” concentrate on using the software to boost the efficiency of internal processes and operating procedures, rather than for generating new revenue streams. “Where companies struggle is if they focus on process efficiencies and the bottom line because of the level of investment required,” says Semach. “But those that focus on leveraging AI to create new business and top-line growth are starting to see longer-term benefits.” 


How Optimizing MLOps can Revolutionize Enterprise AI

With database-deployment, it only takes one line of code to deploy a model. The database-deployment system automatically generates a table and trigger that embody the model execution environment. No more messing with containers. All a data scientist has to do is enter records of features into the system-generated predictions database table to do inference on these features. The system will automatically execute a trigger that runs the model on the new records. This saves time for future retraining too, since the prediction table holds all the new examples to add to the training set. This enables predictions to stay continuously up-to-date, easily – with little to no manual code. ... The other major bottleneck in the ML pipeline happens during the data transformation process: manually transforming data into features and serving those features to the ML model is time-intensive and monotonous work. A Feature Store is a shareable repository of features made to automate the input, tracking, and governance of data into machine learning models. Feature stores compute and store features, enabling them to be registered, discovered, used, and shared across a company.


How to get started in quantum computing

Unlike binary bits, qubits can exist as a ‘superposition’ of both 1 and 0, resolving one way or the other only when measured. Quantum computing also exploits properties such as entanglement, in which changing the state of one qubit also changes the state of another, even at a distance. Those properties empower quantum computers to solve certain classes of problem more quickly than classical computers. Chemists could, for instance, use quantum computers to speed up the identification of new catalysts through modelling. Yet that prospect remains a distant one. Even the fastest quantum computers today have no more than 100 qubits, and are plagued by random errors. In 2019, Google demonstrated that its 54-qubit quantum computer could solve in minutes a problem that would take a classical machine 10,000 years. But this ‘quantum advantage’ applied only to an extremely narrow situation. Peter Selinger, a mathematician and quantum-computing specialist at Dalhousie University in Halifax, Canada, estimates that computers will need several thousand qubits before they can usefully model chemical systems.
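For a single qubit, the superposition described above can be simulated classically: the state is a vector of two amplitudes whose squared magnitudes give the measurement probabilities under the Born rule. A sketch (this illustrates the mathematics only, not real hardware behaviour):

```python
import math
import random

# State vector of one qubit in an equal superposition: (|0> + |1>) / sqrt(2)
amp0 = amp1 = 1 / math.sqrt(2)

# Measurement probabilities are the squared amplitudes (Born rule).
p0, p1 = amp0 ** 2, amp1 ** 2
print(round(p0, 3), round(p1, 3))  # 0.5 0.5

# Measuring resolves the superposition one way or the other.
random.seed(1)
outcome = 0 if random.random() < p0 else 1
print(outcome)
```

The catch is scale: an n-qubit state needs 2**n amplitudes, which is why classical simulation breaks down around 50 qubits and why real quantum hardware becomes interesting despite today's error rates.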


Non-Invasive Data Governance Q&A

Data Governance can be non-invasive if people are recognized in the role of data steward based on their existing relationship to the data. People define, produce and use data as part of their everyday jobs. People automatically are stewards if they are held formally accountable for how they define, produce and use data. The main premise of being non-invasive is that the organization is already governing data (in some form) and the issue is that they are doing it informally, leading to inefficiency and ineffectiveness in how the data is being governed. For example, people who use sensitive data are already expected to protect that data. The NIDG approach assures that these people know how data is classified, and that they follow the appropriate handling procedures for the entire data lifecycle. You are already governing but you can do it a lot better. We are not going to overwhelm you with new responsibilities that you should already have. ... The easiest answer to that question is that almost everybody looks at governance like they look at government. People think that data governance has to be difficult, complex and bureaucratic, when the truth is that it does NOT have to be that way. People are already governing and being governed within organizations, but it is being done informally.


Apache Kafka in a Smart City Architecture

A smart city functions alongside various interfaces, data structures, and technologies. Many high-volume data streams must be integrated, correlated, and processed in real-time. Scalability and elastic infrastructure are essential for success. Many data streams contain mission-critical workloads and must offer reliability, zero data loss, and persistence. An event streaming platform based on the Apache Kafka ecosystem provides all these capabilities. ... A smart city requires more than real-time data integration and real-time messaging. Many use cases are only possible if the data is also processed continuously in real-time. That's where Kafka-native stream processing frameworks such as Kafka Streams and ksqlDB come into play. ... The public sector and smart city architectures leverage event streaming for various use cases. The reasons are the same as in all other industries: Kafka provides an open, scalable, elastic infrastructure. Additionally, it is battle-tested and runs in every infrastructure (edge, data center, cloud, bare metal, containers, Kubernetes, fully-managed SaaS such as Confluent Cloud). But event streaming is not the silver bullet for every problem. Therefore, Kafka complements other technologies such as MQTT for edge integration, or a cloud data lake for batch analytics.
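The continuous real-time processing mentioned above, for example a windowed aggregate over sensor readings, can be illustrated without a broker. The events and window size below are hypothetical; a real deployment would consume from a Kafka topic via Kafka Streams or express the same logic as a ksqlDB query with a tumbling window:

```python
from collections import defaultdict

def windowed_average(events, window_secs=60):
    """Group (timestamp, sensor_id, value) events into tumbling windows and
    compute the per-sensor average in each window, the kind of continuous
    aggregation ksqlDB writes as:
    SELECT sensor, AVG(value) ... WINDOW TUMBLING (SIZE 60 SECONDS)"""
    sums = defaultdict(lambda: [0.0, 0])
    for ts, sensor, value in events:
        window_start = ts - ts % window_secs   # tumbling window this event falls in
        acc = sums[(window_start, sensor)]
        acc[0] += value
        acc[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

events = [  # hypothetical traffic-sensor readings: (epoch seconds, id, speed)
    (100, "s1", 50.0), (110, "s1", 70.0), (200, "s1", 40.0),
]
print(windowed_average(events))  # {(60, 's1'): 60.0, (180, 's1'): 40.0}
```

In a real smart-city pipeline this aggregation would run continuously over an unbounded stream, emitting updated window results as events arrive, rather than over a finished list.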


Remote work: 4 ways to spot a toxic culture

Trust is the fundamental element of a high-performing culture. Especially in a remote workplace, it’s difficult to be a lone wolf and not collaborate on projects. If you notice that your team is avoiding working with someone, look to see if it’s a pattern. Perhaps that individual is “phoning it in” or making too many mistakes, and the team can’t trust their work anymore. You need to address this right away to avoid disappointing the rest of the team. Ask yourself: How much do folks enjoy redoing someone else’s work? Or watching the employee screw up and get away with it? Or questioning why they are working so hard while others aren’t? Worse, your team members may start wondering if they can trust you as a manager if you won’t handle the problem. ... When you hear someone make a statement that may be judgmental, ask the person, “What do you know to be true? What are the facts?” A good way to tell if someone is stating facts or judgments is to apply the “videotape test:” Can what they describe be captured by a video camera? For example, “He was late for the meeting” is a fact and passes the test. In contrast, “He’s lazy” is a judgment and doesn’t pass the test. Be mindful when you’re hearing judgments and try to dig out the facts.


Changing the AI Landscape Through Professionalization

Organizations should work with data architects, business owners, and solution architects to develop an AI strategy underpinned by a data strategy and data taxonomy, and by analyzing the value their company can and wishes to create. “Establishing a data-driven culture is the key—and often the biggest challenge—to scaling artificial intelligence across your organization.” While your technology enables business, your workforce is the essential driving force. It is crucial to democratize data and AI literacy by encouraging skilling, upskilling, and reskilling. Employees will need to shift their mindset from experience-based, leadership-driven decision making to data-driven decision making, in which they augment their intuition and judgement with AI algorithms’ recommendations to arrive at better answers than either humans or machines could reach on their own. My recommendation would be to carve out a “System of Knowledge & Learning” as a separate stream in the overall enterprise architecture, alongside Systems of Record, Systems of Engagement & Experience, and Systems of Innovation & Insight. AI and data literacy will also increase employee satisfaction, because the organization is allowing its workforce to identify new areas for professional development.


How Skyworks embraced self-service IT

At Skyworks, the democratization of IT is all about giving our business users access to technology—application development, analytics, and automation—with the IT organization providing oversight, but not delivery. IT provides oversight in the form of security standards and release and change management strategies, which gives our business users both the freedom to improve their own productivity and the assurance that they are not reinventing the automation wheel across multiple sites. COVID has been a real catalyst of this new operating model. As in most companies, when COVID hit, we started to see a flurry of requests for new automation and better analytics in supply chain and demand management. Luckily for us, we had already started to put the foundation in place for our data organization, so we were able to capitalize on this opportunity to move into self-service. ... For IT to shift from being order-takers to enablers of a self-service culture, we created a new role: the IT business partner. We have an IT business partner for every function; these people participate in all of the meetings of their dedicated function, and rather than asking “What new tool do you need?”, they ask, “What is the problem you are trying to solve?” IT used to sit at the execution table; with our new IT business partners, we now sit at the ideation table.


12 Service Level Agreement (SLA) best practices for IT leaders

Smart IT leaders understand that negotiation is not concession. It’s critical to reach a mutually agreed pathway to providing the service that the client expects, says Vamsi Alla, CTO at Think Power Solutions. In particular, IT leaders should work with providers on penalties and opt-out clauses. “A good SLA has provisions for mutually agreed-upon financial and non-financial penalties when service agreements are not met,” Alla says. Without that, an SLA is worth little more than the paper on which it’s written. ... “The most common mistake is to include performance metrics that are not properly reviewed and are unattainable,” Alla says. “This is usually done because the client has asked for it and the service provider is too willing to oblige. This may cause the contract to come through, but the road to execution becomes bumpy as the metrics can’t be achieved.” The level of service requested directly impacts the price of the service. “It’s important to understand what a reasonable level of performance is in the market so as not to over-spec expectations,” Fong of Everest Group says.
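To make a penalty clause concrete, here is a hedged sketch of a tiered service-credit schedule of the kind such SLAs encode; the uptime target, tiers, and fee below are invented for illustration, not taken from any real agreement:

```python
def sla_penalty(measured_uptime, target_uptime=0.999, monthly_fee=10_000.0):
    """Hypothetical tiered service-credit schedule: the further below the
    agreed uptime target, the larger the credit owed to the client."""
    if measured_uptime >= target_uptime:
        return 0.0
    shortfall = target_uptime - measured_uptime
    if shortfall < 0.005:
        return 0.10 * monthly_fee   # minor miss: 10% credit
    if shortfall < 0.01:
        return 0.25 * monthly_fee   # larger miss: 25% credit
    return 0.50 * monthly_fee       # severe miss: 50% credit

print(sla_penalty(0.9995))  # 0.0 -- target met, no credit owed
print(sla_penalty(0.995))   # 1000.0 -- 10% credit
```

The point of writing the tiers down explicitly is that both parties can verify, from measured data, exactly what a miss costs.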


Security starts with architecture

Security must be viewed as an organizational value that exists in all aspects of its operation and in every part of the product development life cycle. That includes planning, design, development, testing and quality assurance, build management, release cycles, the delivery and deployment process, and ongoing maintenance. The new approach has to be both strategic and tactical. In strategic terms, every potential area of vulnerability has to be conceptually addressed through holistic architectural design. During the design process, tactical measures have to be implemented in each layer of the technology ecosystem (applications, data, infrastructure, data transfer, and information exchange). Ultimately, the responsibility will fall into the hands of the development and DevOps teams to build secure systems and fix security problems. The strategic and tactical approach outlined will allow them to handle security successfully. Security policies must be applied throughout the product development life cycle, right where code is being written, data systems are being developed, and infrastructure is being set up.



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - March 06, 2021

From Agile to DevOps to DevSecOps: The Next Evolution

While Agile and DevOps share common goals, they have not always agreed on how to achieve those goals. DevOps differs in many respects from Agile, but, at its best, DevOps applies Agile methodologies, along with lean manufacturing principles, to speed up software deployment. One area of particular tension between Agile and DevOps is that the latter relies heavily on tools, particularly for the automation of testing and deployment processes. But DevOps can overcome the resistance of Agile developers to tool usage simply by applying Agile principles themselves. Effectively, DevOps proponents must convince Agile teams that dogmatic adherence to the letter of Agile is actually inconsistent with Agile in the first place. Ironically, Agile developers who insist that tools and processes are always bad actually violate Agile principles by refusing to acknowledge the benefits offered through change, another Agile principle. The challenge is to have the Agile development teams trust in the automation efforts of DevOps, while at the same time encouraging the DevOps team to consider the business goals of deployment rather than pursuing speed of deployment for its own sake.


Geoff Hinton’s Latest Idea Has The Potential To Break New Ground In Computer Vision

According to Dr. Hinton, the obvious way to represent the part-whole hierarchy is by combining dynamically created graphs with neural network learning techniques. But, if the whole computer is a neural network, he explained, it is unclear how to represent part-whole hierarchies that are different for every image if we want the structure of the neural net to be identical for all images. Capsule networks, introduced by Dr. Hinton a couple of years ago, offer a solution: a group of neurons, called a capsule, is set aside for each possible type of object or part in each region of the image. However, the problem with capsules is that they use a mixture to model the set of possible parts. The computer will have a hard time answering questions like “Is the headlight different from the tyres?” (more on this example in the next section). The recent work on Neural Fields offers a simple way to represent values such as the depth or intensity of an image. It uses a neural network that takes as input a code vector representing the image, along with an image location, and outputs the predicted value at that location.
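A neural field of the kind described can be sketched in a few lines: a tiny network takes a code vector plus an (x, y) location and returns a predicted value (say, intensity) at that location. The architecture and the random weights below are stand-ins for a real, learned model:

```python
import math, random

random.seed(0)

def neural_field(code, x, y, w1, w2):
    """Tiny neural field: (image code vector, location) -> predicted value.
    One hidden tanh layer; in practice the weights would be learned."""
    inp = code + [x, y]
    hidden = [math.tanh(sum(wi * v for wi, v in zip(row, inp))) for row in w1]
    return sum(wi * h for wi, h in zip(w2, hidden))

code = [0.3, -0.7, 1.1, 0.2]  # stand-in for a learned per-image code vector
w1 = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(8)]  # 4 code dims + x + y
w2 = [random.uniform(-1, 1) for _ in range(8)]

# The same network is queried at different image locations:
v1 = neural_field(code, 0.1, 0.9, w1, w2)
v2 = neural_field(code, 0.5, 0.5, w1, w2)
print(v1, v2)
```

The key property is the one the article describes: the network's structure is identical for every image, while the per-image content lives entirely in the code vector.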


Addressing Security Throughout the Infrastructure DevOps Lifecycle

Keep in mind that developer-first security doesn’t preclude “traditional” cloud security methods — namely monitoring running cloud resources for security and compliance misconfigurations. First of all, unless you’ve achieved 100% parity between IaC and the cloud (unlikely), runtime scanning is essential for complete coverage. You probably still have teams or parts of your environment — maybe legacy resources — that are still being manually provisioned via legacy systems or directly in your console and so need to be continuously monitored. Even if you are mostly covered by IaC, humans make mistakes and SREmergencies are bound to happen. We recently wrote about the importance of cloud drift detection to catch manual changes that result in unintentional deltas between code configuration and running cloud resources. Insight into those resources in production is essential to determine those potentially risky gaps. Runtime scanning also has some advantages. Because it follows the actual states of configurations, it’s the only viable way of evaluating configuration changes over time when managing configuration in multiple methods. Relying solely on build-time findings without attributing them to actual configuration states in runtime could result in configuration clashes.
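The drift-detection idea reduces to comparing two views of the same resource: what the code declares versus what is actually running. A minimal sketch, with hypothetical S3-bucket settings standing in for real cloud state:

```python
def detect_drift(iac_config, runtime_config):
    """Compare configuration declared in code (IaC) with the running state
    and report the unintentional deltas ('drift')."""
    drift = {}
    for key in set(iac_config) | set(runtime_config):
        declared = iac_config.get(key, "<absent in code>")
        actual = runtime_config.get(key, "<absent in cloud>")
        if declared != actual:
            drift[key] = (declared, actual)
    return drift

# Hypothetical bucket settings: code says private, someone flipped it in the console.
iac = {"acl": "private", "versioning": "enabled", "encryption": "aws:kms"}
live = {"acl": "public-read", "versioning": "enabled", "encryption": "aws:kms", "logging": "off"}
print(detect_drift(iac, live))
# drift on 'acl' and 'logging' (iteration order may vary)
```

Real drift detectors do the same comparison against the provider's APIs and the IaC state file, then attribute each delta to a manual change or a code change.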


Privacy breaches in digital transactions: Examination under Competition Law or Data Protection Law?

As long as the search engines’ search data is kept secret, no rival, or would-be rival or entrant, will have access to this critical ‘raw material’ for search innovation. Further, when transactions take place in the digital economy, firms generally tend to collect personal as well as non-personal data of users in exchange for services provided. While it can be argued that personal data is probably collected with the user’s consent, usually, collection of non-personal data happens without the consent or knowledge of the consumers. Data is further compromised when businesses that have large amounts of data merge or amalgamate, and when dominant firms abuse their market position and resort to unethical practices. Traditional Competition Law analysis involves a wide focus on ‘pricing models’ i.e., methods used by business players to determine the price of their goods or services. User data forms part of the ‘non-pricing model’. With the Competition Act, 2002 undergoing a number of changes owing to technological developments, there is a possibility that non-pricing models are also considered under the ambit of the Act.


GraphQL: Making Sense of Enterprise Microservices for the UI

GraphQL has become an important tool for enterprises looking for a way to expose services via connected data graphs. These graph-oriented ways of thinking offer new advantages to partners and customers looking to consume data in a standardized way. Apart from the external consumption benefits, using GraphQL at Adobe has offered our UI engineering teams a way to grapple with the challenges related to the increasingly complicated world of distributed systems. Adobe Experience Platform itself offers dozens of microservices to its customers, and our engineering teams also rely on a fleet of internal microservices for things like secret management, authentication, and authorization. Breaking services into smaller components in a service-oriented architecture brings a lot of benefits to our teams. Some drawbacks, however, must be mitigated to realize those advantages. More layers mean more complexity. More services mean more communication. GraphQL has been a key component for the Adobe Experience Platform user experience engineering team: it allows us to embrace the advantages of SOA while helping us navigate the complexities of microservice architecture.
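The aggregation pattern GraphQL formalizes can be illustrated without a GraphQL library: a single resolver fans out to several backend services and returns only the fields the client selected. The service names and payloads below are mocked and hypothetical, not Adobe's actual APIs:

```python
# Mocked internal microservices (hypothetical names and payloads):
def auth_service(user_id):
    return {"user_id": user_id, "roles": ["analyst"]}

def profile_service(user_id):
    return {"user_id": user_id, "name": "Ada"}

def dataset_service(user_id):
    return {"datasets": ["clickstream", "crm"]}

def resolve_user(user_id, selection):
    """Answer one client query by fanning out to several microservices and
    returning only the requested fields -- the pattern a GraphQL gateway
    formalizes with a schema and per-field resolvers."""
    full = {**auth_service(user_id), **profile_service(user_id), **dataset_service(user_id)}
    return {field: full[field] for field in selection}

print(resolve_user(7, ["name", "roles", "datasets"]))
# {'name': 'Ada', 'roles': ['analyst'], 'datasets': ['clickstream', 'crm']}
```

The UI then makes one request shaped by its own needs, instead of stitching together three service calls itself.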


Serverless Functions for Microservices? Probably Yes, but Stay Flexible to Change

When serverless functions are idle they cost nothing (“pay per use” model). If a serverless function is called by 10 clients at the same time, 10 instances of it are spun up almost immediately (at least in most cases). The entire provision of infrastructure, its management, high availability (at least up to a certain level) and scaling (from 0 to the limits defined by the client) are provided out of the box by teams of specialists working behind the scenes. Serverless functions provide elasticity on steroids and allow you to focus on what is differentiating for your business. ... A “new service” needs to go out fast to the market, with the lowest possible upfront investment, and needs to be a “good service” from the start. When we want to launch a new service, a FaaS model is likely the best choice. Serverless functions can be set up fast and minimise infrastructure work. Their “pay per use” model means no upfront investment. Their scaling capabilities provide consistently good response times under different load conditions. If, after some time, the load becomes more stable and predictable, then the story can change, and a more traditional model based on dedicated resources, whether Kubernetes clusters or VMs, can become more convenient than FaaS.
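A serverless function is, at its core, a stateless handler that the platform invokes once per request and scales for you. A minimal AWS-Lambda-style sketch; the event shape here is hypothetical:

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style function: stateless, invoked per request,
    billed only while it runs. The platform spins up concurrent
    instances as load arrives and tears them down when idle."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello, {name}"})}

# Each concurrent request gets its own invocation; scaling is the platform's job.
print(handler({"name": "Ada"}))
```

Because the handler holds no state between calls, ten simultaneous clients can simply mean ten simultaneous instances, which is exactly the elasticity the article describes.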


Unconventional Sentiment Analysis: BERT vs. Catboost

Sentiment analysis is a Natural Language Processing (NLP) technique used to determine if data is positive, negative, or neutral. Sentiment analysis is fundamental, as it helps to understand the emotional tones within language. This, in turn, helps to automatically sort the opinions behind reviews, social media discussions, etc., allowing you to make faster, more accurate decisions. Although sentiment analysis has become extremely popular in recent times, work on it has been progressing since the early 2000s. Traditional machine learning methods such as Naive Bayes, Logistic Regression, and Support Vector Machines (SVMs) are widely used for large-scale sentiment analysis because they scale well. Deep learning (DL) techniques have now been proven to provide better accuracy for various NLP tasks, including sentiment analysis; however, they tend to be slower and more expensive to train and use. ... CatBoost is a high-performance, open-source library for gradient boosting on decision trees. From release 0.19.1, it supports text features for classification on GPU out of the box. The main advantage is that CatBoost can include categorical features and text features in your data without additional preprocessing.
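Of the traditional methods named above, Naive Bayes is simple enough to sketch with the standard library alone. The four training documents below are toy data for illustration; real systems train on far larger corpora:

```python
from collections import Counter
import math

def train(docs):
    """Fit a tiny multinomial Naive Bayes sentiment model."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    class_counts = Counter()
    for text, label in docs:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = set(word_counts["pos"]) | set(word_counts["neg"])
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Score each class with log prior + add-one-smoothed log likelihoods."""
    total = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

docs = [("great product loved it", "pos"), ("terrible waste of money", "neg"),
        ("really loved the quality", "pos"), ("awful terrible support", "neg")]
model = train(docs)
print(predict("loved the product", *model))   # pos
print(predict("terrible awful", *model))      # neg
```

This is the bag-of-words baseline that BERT- and CatBoost-based approaches are measured against: fast and scalable, but blind to word order and context.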


How to gain advanced cyber resilience and recovery across the IT ops and SecOps divide

The legacy IT world was all about protecting what they know about, and it’s hard to change. The new world is all about automation, right? It impacts everything we want to do and everything that we can do. Why wouldn’t we try to make our jobs as simple and easy as possible? When I first got into IT, one of my friends told me that the easiest thing you can do is script everything that you possibly can, just to make your life simpler. Nowadays, with the way digital workflows are going, it’s not just automating the simple things -- now we’re able to easily automate the complex ones, too. We’re making it so anybody can jump in and get this automation going as quickly as possible. ... The security challenge has changed dramatically. What’s the impact of Internet of things (IoT) and edge computing? We’ve essentially created a much larger attack surface area, right? What’s changed in a very positive way is that this expanded surface has driven automation and the capability to not only secure workflows but to collaborate on those workflows. We have to have the capability to quickly detect, respond, and remediate.


Compute to data: using blockchain to decentralize data science and AI with the Ocean Protocol

If algorithms run where the data is, then this means how fast they will run depends on the resources available at the host. So the time needed to train algorithms that way may be longer compared to the centralized scenario, factoring in the overhead of communications and crypto. In a typical scenario, compute needs move from client side to data host side, said McConaghy: "Compute needs don't get higher or lower, they simply get moved. Ocean Compute-to-Data supports Kubernetes, which allows massive scale-up of compute if needed. There's no degradation of compute efficiency if it's on the host data side. There's a bonus: the bandwidth cost is lower, since only the final model has to be sent over the wire, rather than the whole dataset. There's another flow where Ocean Compute-to-Data is used to compute anonymized data. For example using Differential Privacy, or Decoupled Hashing. Then that anonymized data would be passed to the client side for model building there. In this case most of the compute is client-side, and bandwidth usage is higher because the (anonymized) dataset is sent over the wire. Ocean Compute-to-Data is flexible enough to accommodate all these scenarios".
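The compute-to-data flow can be caricatured in a few lines: training runs host-side, where the data lives, and only the small fitted model crosses the wire. The "model" below is just a mean, a stand-in for a real training job, and is not Ocean Protocol code:

```python
def train_on_host(dataset):
    """Runs on the data host's side; the raw dataset never leaves.
    Fitting a mean stands in for a real training job here."""
    return {"mean": sum(dataset) / len(dataset)}  # a few bytes cross the wire

def client_side(model, x):
    """The client consumes the trained model without seeing the raw data."""
    return x - model["mean"]

private_data = [4.0, 6.0, 8.0]       # stays on the data host
model = train_on_host(private_data)  # only this is sent to the client
print(model)                         # {'mean': 6.0}
print(client_side(model, 10.0))      # 4.0
```

This also makes the bandwidth point from the quote concrete: the model is a few bytes, whereas shipping the dataset (or even an anonymized copy of it) scales with the data's size.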


Exchange Server Attacks Spread After Disclosure of Flaws

Beyond the U.S. federal government, the impact of the vulnerabilities continues to grow - and not just among the targeted sectors named by Microsoft. The company says those groups include infectious disease researchers, law firms, higher education institutions, defense contractors, policy think tanks and nongovernment organizations. Volexity, which contributed research for the vulnerability findings, first noticed exploitation activity against its customers around Jan. 6. That activity has suddenly ticked up now that the vulnerabilities are public, says Adair, CEO and founder of the firm. "The exploit already looks like it has spread to multiple Chinese APT groups who have become rather aggressive and noisy - quite a marked change from how it started with what we were seeing," he says. Threat detection company Huntress says it has seen compromises of unpatched Exchange servers in small hotels, one ice cream company, a kitchen appliance manufacturer and what it terms "multiple senior citizen communities." "We have also witnessed many city and county government victims, healthcare providers, banks/financial institutions and several residential electricity providers," writes John Hammond, a senior threat researcher at Huntress.



Quote for the day:

"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell

Daily Tech Digest - March 05, 2021

Do Disaster Recovery Sites Actually Work?

Having the right resources available during a crisis is crucial. While internal IT teams perform disaster drills and claim they are able to run operations from the remote site, the fact is that new discoveries come to the fore when a disaster strikes, and the top management team is usually oblivious to these issues. These so-called drills are often just a farce put on for external and internal auditors, who seldom get to the core issues that may have occurred during the drill. Most auditor reports are checklists, and auditors are happy to tick them off and present a rosy picture to management. It is important to remember that NSE’s press release also said that it did not invoke the disaster recovery site, based on management consultations. So what actually is a disaster recovery site? In layman’s terms, it is an alternate site (within the same city or in another city) that is capable of running all the operations of the primary site. The genesis of a disaster recovery site is the business continuity policy (BCP). This document lists all IT systems that hold the data of an organisation, their dependencies on other systems, and all the elements necessary to run those systems.


We’ll never have true AI without first understanding the brain

The vast majority of AI researchers don’t really embrace the idea that the brain is important. I mean, yes, people figured out neural networks a while ago, and they’re kind of inspired by the brain. But most people aren’t trying to replicate the brain. It’s just whatever works, works. And today’s neural networks are working well enough. And most people in AI have very little understanding of neuroscience. It’s not surprising, because it’s really hard. It’s not something you just sit down and spend a couple of days reading about. Neuroscience itself has been struggling to understand what the hell’s going on in the brain. But one of the big goals of writing this book was to start a conversation about intelligence that we’re not having. I mean, my ideal dream is that every AI lab in the world reads this book and starts discussing these ideas. Do we accept them? Do we disagree? That hasn’t really been possible before. ... An AI that can detect cancer cells is great. But is that intelligence? No. In the book I use the example of robots on Mars building a habitat for humans. Try to imagine what kind of AI is required to do that. Is that possible? It’s totally possible.


Why paying off ransomware gangs is not a good idea

So what is the alternative? Insurers, especially in the US, urge their clients to quickly and quietly pay the ransom to minimise the damage of disruption. Then insurers allow the company to claim back the ransom payment on their insurance and raise their premiums for the following year. This payment is usually handled discreetly by a broker. In essence, the ransomware ecosystem functions like a protection racket, effectively supported by insurers who are set to pocket higher premiums as attacks continue. Aside from the moral objections we might have to routinely paying money to criminals, this practice causes two important practical problems. First, it encourages complacency in cybersecurity. This complacency was best exemplified when a hacked company paid a ransom, but never bothered to investigate how the hackers had breached their system. The company was promptly ransomed again, by the same group using the very same breach, just two weeks later. Second, some ransomware gangs invest their ill-gotten gains into the research and development of better cyber-tools. Many cybersecurity researchers are concerned about the increasing sophistication of the malware used by leading cybercrime groups such as REvil or Ryuk, which are both thought to be based in Russia.


Credential exposure trends: You need a better password

The data recovered by SpyCloud researchers includes more than 4.6 billion pieces of personally identifiable information (PII), including names, addresses, birthdates, job titles, social media URLs and nearly 1.3 billion phone numbers. Criminals use PII to create fake accounts or steal someone else’s identity and then apply for lines of credit, intercept tax refunds, drain bank accounts and more. With as little as one or two pieces of PII, they can compromise a person’s identity. Despite years of advice about the importance of strong passwords, people inevitably end up reusing or recycling the same credentials for multiple sites. Outdated password complexity requirements have complicated the issue by providing people with a false sense of security when they recycle a favorite password with a few simple changes, like capitalizing the first letter and adding a 1 or ! at the end. Industry standards call for organizations to convert plaintext passwords into hashes so if they are breached, criminals can’t easily access the passwords themselves. But some hashing algorithms are computationally harder to crack than others. Unfortunately, even the strongest hashing algorithm means little when users make weak or common password choices.
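The difference between hashing algorithms is easy to demonstrate with the standard library: PBKDF2 with a per-user salt and a high iteration count makes every guess expensive for an attacker, unlike a single fast hash of the password. The iteration count below is a commonly recommended order of magnitude, not a universal standard:

```python
import hashlib, os

def hash_password(password, salt=None, iterations=600_000):
    """Slow, salted PBKDF2-HMAC-SHA256 hash. Each verification (and thus
    each cracking attempt) costs hundreds of thousands of rounds."""
    salt = salt or os.urandom(16)  # unique salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, digest, iterations=600_000):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations) == digest

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("Password1!", salt, digest))                    # False
```

Even so, as the article notes, a slow hash only raises the cost per guess; it cannot rescue a password that sits near the top of every cracking wordlist.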


The office of the future is about people not places

Experts suggest the future of employment will be a hybrid mix of office- and home-working. As many as 90% of HR leaders believe employees will carry on working remotely in the post-COVID age, says tech analyst Gartner. That shift to hybrid working is something that resonates with Paul Coby, CIO at global science and chemicals company Johnson Matthey, whose guess is that people in the future will split their time between working from home and going into the office. What that means for many of us is that the traditional nine-to-five working day at the corporate HQ isn't coming back. "It seems to me that sitting in an office doing emails doesn't seem like a great use of offices or an individual's time," says Coby. Other experts agree. Researcher CCS Insight predicts more than half of all office-based employees will still work mainly remotely through 2022. IoD research, meanwhile, suggests more than half of business leaders plan on reducing their long-term use of workplaces, with more than one in five reporting use will be significantly lower. The office that many of us knew – with its command-and-control leadership styles – is probably gone forever. Get it right, and managers could change how we all work for the better. But quite how business leaders will organise and manage the hybrid workplace of the future is still very much up for debate right now.


Why global power grids are still vulnerable to cyber attacks

“Power grids are getting increasingly vulnerable because of digitalization and the use of more smart applications,” said Daine Loh, a Singapore-based power and renewables analyst at Fitch Solutions. It’s a threat highlighted in an initial probe in India that found an October blackout in Mumbai may have been caused by cyber sabotage. That outage impacted stock markets, trains and thousands of households in the nation’s financial hub. The disruptive potential of grid failures — as seen in Texas last month due to a sudden deep freeze — makes the sector a key target, particularly for state-based hostile actors. Over the past four decades, power plants and substations have been moving from manual to automatic controls, and are increasingly being connected to public and private networks for remote access, leaving them exposed to attacks. Producers and distributors have also often been reluctant to spend on protecting themselves against low-probability attacks. “India’s power system is in urgent need of proper cybersecurity systems,” said Reji Kumar Pillai, president of India Smart Grid Forum, a think-tank backed by the federal power ministry and which advises governments, regulators and utilities. 


Version 2 of Google’s Flutter toolkit adds support for desktop and web apps

Over the course of the last year or so, the team started working on what it calls Canvas Kit. This WebAssembly-based project takes the same Skia graphics engine that powers Android and Chrome itself and makes it available to web apps. “What that’s meant is that we can now essentially bypass the core HTML — sort of the document-centric parts of the web platform — and really use the app-centric parts of the web platform without leaving [behind] things like auto-complete of text or passwords and all the things that keep the web feeling very unique,” Sneath said. On the desktop, Google is announcing that Canonical is going all-in on Flutter and making it the default choice of all its future desktop and mobile apps. Microsoft, too, is expanding its support for Flutter and working with Google on Windows support for Flutter. Given Microsoft’s interest in Android, that’s maybe no huge surprise, and indeed, Microsoft today is releasing contributions to the Flutter engine to help support foldable Android devices. In total, Google notes, there are now over 15,000 packages for Flutter and Dart from companies like Amazon, Microsoft, Adobe, Huawei, Alibaba, eBay and Square.


Data Science Environments

Environment managers are critical to the concept of virtual machines. A virtual machine (VM) can be thought of as a computer inside of your computer (Inception style). As with everything mentioned here, a virtual machine is simply a method of file management. The VM is a program that isolates itself from all the other files on the computer; it may even run a different operating system. (For example, a MacOS computer might have a virtual machine running a Windows operating system.) Virtual machines are very effective when you want to recreate a programming environment on another computer, because you don’t need to know anything about that computer’s current dependencies. You can set up a virtual machine that has exactly the files visible to it that you have on your own computer’s virtual machine. For this reason, data science heavily utilizes virtual machines; they ensure reproducibility of results. Cloud-based applications are another common example of VMs. The developer counts on the fact that their VM will be isolated from any other code living on the server hosting their application.


8 mobile security threats you should take seriously

A mobile device is only as secure as the network through which it transmits data. In an era where we're all constantly connecting to networks that might not be optimally secured — be they improperly configured home networks, for remote workers, or public WiFi networks — our information frequently isn't as protected as we might assume. Just how significant of a concern is this? According to research by Wandera, in a more typical year, corporate mobile devices use WiFi almost three times as much as they use cellular data. Nearly a quarter of devices connect to open and potentially insecure WiFi networks, and 4% of devices encounter a man-in-the-middle attack — in which someone maliciously intercepts communication between two parties — within an average month. Those numbers have dipped this past year due to reduced travel and fewer physical businesses being open during COVID, but that doesn't mean the threat is gone — or that there's no need to remain ahead of the game, even with employees working mostly from home. "Rather than relying on man-in-the-middle attack detection to be reactive, we recommend organizations take a more proactive approach to securing remote connections," says Michael Covington.


Risky business: 3 timeless approaches to reduce security risk in 2021

Some organizations see migrating to the cloud as a way of creating greenfield environments that can be secured more easily than the tech-debt-laden on-premises network. Yes, cloud adoption does have many security benefits. And, yes, the cloud does represent an opportunity to do things over better, with a greenfield architecture that has best practices baked neatly into it. Only, many organizations’ lack of cloud security processes and controls can quickly eat away at the cloud’s forecasted risk reduction benefits, reducing the overall value of this opportunity. You should keep in mind that AWS S3 buckets are left unsecured and publicly available all the time. Developers have been known to leave (or hardcode) private keys and tokens in the most embarrassing places. Personnel leave files unsecured and open to the public far too often. Publicly available endpoints are improperly segmented from private networks, sometimes creating an autobahn towards the on-premises network. Many organizations don’t do a good enough job of monitoring activity in the cloud or the information flow between the cloud and the on-premises network.
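The hardcoded-key problem mentioned above is commonly caught with simple pattern matching. A minimal sketch using the well-known AWS access-key-ID format; the sample config is invented, and the key is AWS's own documented example value:

```python
import re

# AWS access key IDs follow a well-known pattern: "AKIA" + 16 uppercase alphanumerics.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_keys(text):
    """Flag lines that appear to contain a hardcoded AWS access key ID --
    the kind of check pre-commit hooks and cloud scanners automate."""
    return [(i + 1, line) for i, line in enumerate(text.splitlines())
            if AWS_KEY_RE.search(line)]

sample = """db_host = "10.0.0.5"
aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"
region = "us-east-1"
"""
print(scan_for_keys(sample))
# [(2, 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"')]
```

Running checks like this in CI is cheap insurance against exactly the embarrassing-places problem the article describes, though it complements rather than replaces a proper secrets manager.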



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer

Daily Tech Digest - March 04, 2021

Proptech disruption trends: innovation in the real estate space

Users have found that RPA can learn how to complete administrative tasks, leaving more time to spend on duties that require a more human touch, such as customer service. Tom Reiss, CEO of Roby AI, explained: “By learning how a user carries out a task, RPA can then be custom built and combined with tasks which require an element of human touch. This approach means that companies can become hugely efficient, and staff are no longer weighed down with laborious tasks. “Whilst some companies have traditionally feared proptech, this kind of clever technology can be implemented easily alongside existing structures. In turn, creating minimal disruption to the business and maximum output when it comes to efficiency, cost saving and employee satisfaction.” ... “Video surveillance can be performed in real time, or data can be collected and stored for the purpose of evaluation when required.” Lodhia went on to explain how the cloud has further facilitated safety measures, which have benefitted from remote monitoring and management, particularly during the Covid-19 pandemic. “The impact of cloud technology has had a dramatic impact on proptech, and there are two main benefits,” Lodhia said.


How the Digital Twin Drives Smart Manufacturing

One of the initial areas of focus for implementation of the digital twin has been asset lifecycle management (ALM). Maintaining assets in the field has traditionally been a time-consuming and costly task, but one critical to equipment and system uptime. Today, maintenance technicians can leverage technologies like augmented reality (AR) that allow them to access virtual engineering models and overlay those models on the physical equipment they are maintaining, using specialized AR goggles or glasses. This enables them to work from the most accurate and up-to-date engineering data, helping ensure that maintenance is performed correctly, efficiently, and to specification. These same maintenance methods, based on the merging of virtual and physical environments, can be applied to factory production systems, machines, and work cells. In addition, products, production systems, machines, and work cells can be simulated virtually to test and validate physical systems prior to assembly and installation. Moreover, the virtual commissioning of production automation—an established technology and process—is merging with the more expansive scope of the digital twin.
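The core pattern of validating a change against the virtual model before touching the physical system can be sketched in a few lines. Everything below is illustrative: the class name, the setpoint, and the safety limit are assumptions, not drawn from any real digital-twin platform.

```python
# Minimal digital-twin sketch: a virtual model mirrors a machine's state
# and lets us validate a proposed change in simulation before it is
# applied to the physical equipment.

class SpindleTwin:
    MAX_RPM = 12_000  # assumed safety limit from the engineering model

    def __init__(self, rpm=0):
        self.rpm = rpm  # mirrored state of the physical machine

    def simulate(self, target_rpm):
        """Check a proposed setpoint against the virtual model."""
        return 0 <= target_rpm <= self.MAX_RPM

    def apply(self, target_rpm):
        """Apply a setpoint only after the twin validates it."""
        if not self.simulate(target_rpm):
            raise ValueError(f"setpoint {target_rpm} rejected by twin")
        self.rpm = target_rpm  # in practice, also commanded to the machine

twin = SpindleTwin()
twin.apply(8_000)  # validated virtually, then applied
```

A real twin would simulate physics and sync with live sensor data, but the ordering is the same: test in the virtual environment first, commission the physical system second.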


What's between your clouds? That's key to multi-cloud performance

First, you need management and monitoring layers. These include AIOps, security managers, governance tooling, and other technologies that can manage and control heterogeneous cloud deployments. The management and monitoring layers are just as important as, and perhaps even more important than, the native services that run on those public clouds. These layers of software systems become the jumping-off point for modern cloud operations, and they can operate without leveraging cloud-specific systems as you move forward. Second, public cloud providers are beginning to invest in cross-cloud solutions. Most won't mention the word multi-cloud, but they plan to support this architecture nonetheless. This puts the nail in the coffin of less complex, single-cloud deployments that do not take advantage of best-of-breed services. Some people remain skeptical that public cloud providers will build technology that will integrate with the competition, but the providers really have no other choice. Remember when Apple and Microsoft devices could not communicate? Cloud vendors do. This is not a new trend. Enterprises will continue to move to multi-cloud as the preferred cloud deployment platform, and that move shifts the focus to the technology in the middle, between the clouds.


How We’ll Conduct Algorithmic Audits in the New Economy

Lack of transparent accountability for algorithm-driven decision making tends to raise alarms among impacted parties. Many of the most complex algorithms are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years. Algorithms’ seeming anonymity -- coupled with their daunting size, complexity and obscurity -- presents the human race with a seemingly intractable problem: How can public and private institutions in a democratic society establish procedures for effective oversight of algorithmic decisions? Much as complex bureaucracies tend to shield the instigators of unwise decisions, convoluted algorithms can obscure the specific factors that drove a specific piece of software to operate in a specific way under specific circumstances. In recent years, popular calls for auditing of enterprises’ algorithm-driven business processes have grown. Regulations such as the European Union (EU)’s General Data Protection Regulation may force your hand in this regard. GDPR restricts any “automated individual decision-making” that “significantly affects” EU citizens. Specifically, it restricts any algorithmic approach that factors a wide range of personal data -- including behavior, location, movements, health, interests, preferences, economic status, and so on -- into automated decisions.


A quantum internet is closer to reality, thanks to this switch

For a quantum internet, forming connections between users and adjusting bandwidth means distributing entanglement, the ability of photons to maintain a fixed quantum mechanical relationship with one another no matter how far apart they may be, in order to connect users in a network. Entanglement plays a key role in quantum computing and quantum information processing. "When people talk about a quantum internet, it's this idea of generating entanglement remotely between two different stations, such as between quantum computers," said Navin Lingaraju, a Purdue Ph.D. student in electrical and computer engineering. "Our method changes the rate at which entangled photons are shared between different users. These entangled photons might be used as a resource to entangle quantum computers or quantum sensors at the two different stations." Purdue researchers performed the study in collaboration with Joseph Lukens, a research scientist at Oak Ridge National Laboratory. The wavelength-selective switch that the team deployed is based on similar technology used for adjusting bandwidth for today's classical communication.
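The "fixed quantum mechanical relationship" behind entanglement can be illustrated with the Bell state (|00> + |11>)/sqrt(2), the kind of two-photon state a quantum network would distribute between stations. A minimal sketch of its measurement statistics, using plain Python:

```python
from math import sqrt

# Amplitudes of the Bell state (|00> + |11>)/sqrt(2) over the four
# two-photon basis outcomes. Only the perfectly correlated outcomes
# (both 0 or both 1) carry amplitude.
bell = {"00": 1 / sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / sqrt(2)}

# Born rule: measurement probability is the squared amplitude.
probs = {outcome: amp * amp for outcome, amp in bell.items()}
print(probs)  # 00 and 11 each occur with probability 0.5; 01 and 10 never do
```

The point of the calculation: whichever result one station sees, the other station's photon is guaranteed to match, regardless of the distance between them. That shared correlation is the resource the wavelength-selective switch routes between users.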


What is a solutions architect? A vital role for IT-business alignment

A solutions architect is responsible for evaluating an organization’s business needs and determining how IT can support those needs by leveraging software, hardware, or infrastructure. Aligning IT strategy with business goals has become paramount, and a solutions architect can help determine, develop, and improve technical solutions in support of business goals. A solutions architect also bridges communication between IT and business operations to ensure everyone is aligned in developing and implementing technical solutions for business problems. The process requires regular feedback, adjustments, and problem-solving in order to properly design and implement potential solutions. Solution architecture itself encompasses business, system, information, security, application and technology architecture. Some examples of solutions architecture include developing cloud infrastructure for efficiency, implementing microservices for ecommerce, or adopting security measures for data, systems, and networks. While the scope of the job can vary depending on a business’ specific needs, there are certain responsibilities, skills, and qualifications that solutions architects will need to meet to get the job.


Digital transformation: 5 new realities for CIOs

We’re not just working from home but also attending school, shopping, and conducting all essential communications without ever walking out the front door. Many jobs that we previously thought were only doable from the job site can now be done remotely. Product development teams now have their living rooms and garages full of parts, equipment, etc., harkening back to the early start-up era for companies like Apple, HP, Microsoft, and others. Of course, the more we do from home, the more our finite bandwidth resources are taxed. Traditional peak hours for internet usage were in the evening, but with everyone home 24/7, streaming everything simultaneously, home Wi-Fi strains to remain stable even during an employee’s ideal work hours. We must equip WFH employees with the technology and bandwidth they need to be productive and efficient. Allocate budget to upgrade employees’ home networks to premium bandwidth. Nothing causes more headaches than choppy bandwidth on Zoom when trying to support clients. ... With the move to the cloud and WFH, we’re now forced to manage a high-threat environment every time an employee fires up a laptop or mobile phone and taps into the company network or cloud resources.


AI in Hydroponics: The Future Of Smart Farming

AI-driven ‘Smart Hydroponics’ can determine optimum growth for a plant through a combination of a hardware setup and a software tool that can recreate its growth trajectory. Insights are generated from data obtained by sensors in the hardware. The sensing hardware is divided into three categories, each of which is strategically placed within the hydroponics farm. The sensors sit near the plant roots and collect data about the crop vitals, pH levels, electrical conductivity levels, and nutrient supply. They also detect light density, temperature, and humidity levels. A visual camera also checks the growing plants for colouration and feeds the data to the AI software. On the other hand, insights about the precise nature and needs of the products are generated through machine learning. The AI software system works like the brain behind the entire hydroponic farm. It can choose between different types of LED lighting and modulate light intensity. It can even turn on a suitable irrigation system. It drives end-to-end automation so that fewer tasks need manual intervention. Currently, AI in hydroponics in India may be in a fledgling state.
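The sense-decide-act loop described above can be sketched as a simple rule-based decision function. All thresholds, sensor names, and action names here are hypothetical placeholders for what a real AI-driven system would learn and tune from crop data:

```python
# Illustrative control-loop sketch for a "smart hydroponics" setup.
# Target ranges and action names are assumptions, not from a real system.

TARGETS = {"ph": (5.5, 6.5), "ec_ms_cm": (1.2, 2.0), "moisture_pct": (60, 80)}

def decide_actions(readings):
    """Map out-of-range sensor readings to corrective actions."""
    actions = []
    if readings["ph"] < TARGETS["ph"][0]:
        actions.append("dose_ph_up")          # nutrient solution too acidic
    elif readings["ph"] > TARGETS["ph"][1]:
        actions.append("dose_ph_down")        # too alkaline
    if readings["ec_ms_cm"] < TARGETS["ec_ms_cm"][0]:
        actions.append("add_nutrients")       # conductivity implies dilution
    if readings["moisture_pct"] < TARGETS["moisture_pct"][0]:
        actions.append("run_irrigation")
    if readings["light_lux"] < 10_000:
        actions.append("raise_led_intensity")
    return actions

sample = {"ph": 5.2, "ec_ms_cm": 1.0, "moisture_pct": 55, "light_lux": 8_000}
print(decide_actions(sample))
```

The machine-learning layer the passage describes would replace these fixed thresholds with per-crop setpoints learned from growth data, but the actuation pattern, reading sensors and choosing lighting, dosing, and irrigation actions, is the same.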


Fintech disruption trends: a changing payment landscape on the horizon

With such a dramatic shift to digital commerce, largely accelerated by the impact of Covid-19, demand for software-based payment technology will exponentially increase, according to Justin Pike, founder and chairman of MYPINPAD — the software-based payments company. In this digital world the consumer is opened up to a variety of choice that can’t be replicated in the physical world and competition is fierce. “Consequently, consumer facing brands have recognised the criticality of technology that can significantly improve the customer experience,” says Pike. He believes that software-based payment technology forms the missing piece of the puzzle in terms of innovating and improving the customer experience in a remote environment, where the customer experience is completed on mobile devices. “Standardisation of the payment experience through software, across all channels (both online and offline) is where we are rapidly heading. This innovation will bring a myriad of benefits for both consumer and brand, but it absolutely must be built on a foundation of security,” Pike continues. “For merchants, the opportunity to reach new markets and customers is too good to miss.”


Arguing your way to better strategy

Iterative visualization is achieved by creating a strategy map, which tracks a proposed strategy’s causal path backward from its desired outcome to the factors required to make it happen. The authors illustrate this process by producing a strategy map based on statements about Walmart’s low-cost model, which enabled the retailer to attract customers and vanquish competitors in the pre-digital economy. Working backward from the desired outcome of low costs, they map two of its enablers: operational efficiencies and a bargaining advantage over suppliers. In turn, they enumerate the enablers of those enablers, which for bargaining include high-volume purchasing, negotiating prowess, and private labels. And so on. A strategy map is only the first step in making a strategy argument. “At this stage,” Sørensen and Carroll explain, “these statements are just unfounded claims in the strategy argument, and their veracity and importance have yet to be demonstrated.” That work begins in the second set of activities — logical formalization. Logical formalization entails testing the validity and soundness of the premises underlying the statements in a strategy map.
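A strategy map of this kind is just a directed graph of outcomes and their enablers, and the backward walk from the desired outcome is a graph traversal. A minimal sketch using the Walmart example from the passage (the graph encodes only the enablers the text names):

```python
# Strategy map as a graph: each outcome maps to the factors that enable it.
strategy_map = {
    "low costs": ["operational efficiencies",
                  "bargaining advantage over suppliers"],
    "bargaining advantage over suppliers": ["high-volume purchasing",
                                            "negotiating prowess",
                                            "private labels"],
}

def enablers_of(outcome, graph):
    """Walk backward from an outcome, collecting every enabler on its causal path."""
    found = []
    for factor in graph.get(outcome, []):
        found.append(factor)
        found.extend(enablers_of(factor, graph))  # enablers of enablers, and so on
    return found

print(enablers_of("low costs", strategy_map))
```

As the authors note, enumerating these factors only produces claims; each edge in the graph is a premise whose validity and soundness still has to be tested in the logical-formalization step.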



Quote for the day:

"If you only read the books that everyone else is reading, you can only think what everyone else is thinking." -- Haruki Murakami