Daily Tech Digest - May 31, 2020

The Future Of Fintech: The New Normal After The Covid-19 Crisis

For banks, the new normal marks the end of fintech experimentation. Over the past few years, banks have been obsessed with fintech partnerships. It’s been a way of convincing themselves (and their boards) that they’re innovating and not getting left behind as the industry undergoes a digital transformation. Too many of these efforts, however, have had little impact on the strategic direction, organizational culture, and bottom-line results of the institution. According to Louise Beaumont: “For banks, partnerships won’t generate the quantum leap they need to move beyond a product-centric mentality to deliver next-generation services. At best, they may gain a workable solution that squats awkwardly in the existing infrastructure. At worst, they’ll fail to deliver any noticeable difference.” Many so-called partnerships—often just vendor arrangements rather than true partnerships—are examples of what Jason Henrichs of Fintech Forge likes to call the “fintech petting zoo.” The luxury of experimenting with fintech is gone. Banks will need to accelerate their investments in fintech to achieve both the top-line increases and expense reductions needed to maintain margins and profitability.

ACLU sues Clearview AI claiming the company's tech crosses ethical bounds

The ACLU alleges that by using face recognition technology, Clearview has captured more than 3 billion faceprints from images available online, all without the knowledge or consent of those pictured. "Clearview claims that, through this enormous database, it can instantaneously identify the subject of a photograph with unprecedented accuracy, enabling covert and remote surveillance of Americans on a massive scale," it said. "This technology is so dangerous, in fact, that this little-known startup 'might end privacy as we know it'." The ACLU said that Clearview has "created the nightmare scenario that we've long feared, and has crossed the ethical bounds that many companies have refused to even attempt" and accused the company of building a mass database of billions of faceprints without knowledge or consent. "Neither the United States government nor any American company is known to have ever compiled such a massive trove of biometrics," it wrote. "Adding fuel to the fire, Clearview sells access to a smartphone app that allows its customers -- and even those using the app on a trial basis -- to upload a photo of an unknown person and instantaneously receive a set of matching photos."

GoodData and Visa: A common data-driven future? 

One of the initiatives GoodData is taking to help organizations go from dashboards to data-driven applications is the Accelerator Toolkit, a UI library that enables customized, faster data analytics, along with educational resources. Stanek mentioned that GoodData plans to launch a GoodData University initiative soon to offer more resources to empower organizations. Another noteworthy development for GoodData is the evolution of its Semantic Layer data model. A new modeling tool by GoodData aims to improve collaboration between engineers and analysts to streamline the startup process for enterprise data products. Stanek initially referred to this as an attempt to establish a single version of the truth. This, however, has always been an elusive goal. While improving collaboration between engineers and analysts is commendable, more pragmatically, organizations can aim to establish shared data models among user groups, rather than global ones. Stanek did not sound short of ambition, and our conversation touched upon a number of topics. If you want to listen to it in its entirety, make sure to subscribe to the Orchestrate all the Things podcast, where it will be released soon.

Building the foundation for a strong fintech ecosystem in Saudi Arabia

Prior to COVID-19 and the sudden need for global digitalisation it created, Saudi Arabia already had the potential for a strong fintech network. It is the largest economy in the region: its stock market was worth around $549 billion, contributing over half of the region’s total gross domestic product (GDP) in 2018, and it is a member of the Group of Twenty (G20); this year Saudi Arabia actually holds the G20 presidency. Saudi also has a very young population, with 70 percent of the population under 30 years old in 2017. It is a tech-savvy nation as well, ranking, according to a report by EY, third globally in smartphone usage and seventh in household internet access. This, coupled with the ongoing economic initiatives and investments as part of Saudi Vision 2030, has put Saudi’s fintech prospects and future growth at the forefront. ... Saudi Arabia has an opportunity to further solidify its position and one day be a leader in fintech. As part of Vision 2030, it has already set the foundation to create an environment that not only attracts foreign investment but also provides the tools and guidance to develop its own talent and innovation.

Why Blockchain Needs Kubernetes

Kubernetes and Docker can, and do, abstract away much of the knowledge required to get started. IBM and Corda have containerized their blockchain protocols, and various Ethereum images exist; for added granularity, network component images exist as well, including the Solidity compiler, network stats dashboards, testnets, miner nodes, block explorers, and more. In time, I expect to see more and more network components containerized and made available. Deploying blockchains will be a matter of picking a protocol image and the additional component images, building YAML manifests, and deploying with helm install. While modularity is necessary for designing complex networks and is available for those that need it, the choice overload can and will deter adoption for those that do not have the expertise, time, patience, or resources to explore blockchain technology. By packaging up elements of blockchain networks into image files that can be deployed and managed, the knowledge required to get started will be democratized to those familiar with Docker and Kubernetes.
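As a sketch of what such a deployment might look like, here is a minimal hypothetical Kubernetes manifest for a single containerized blockchain node. The image name, ports, and labels are illustrative assumptions, not a real published image or chart:

```yaml
# Hypothetical manifest: one blockchain node run as a Kubernetes Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chain-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chain-node
  template:
    metadata:
      labels:
        app: chain-node
    spec:
      containers:
      - name: node
        image: example/blockchain-node:latest   # placeholder protocol image
        ports:
        - containerPort: 30303                  # p2p port (Ethereum-style)
        - containerPort: 8545                   # JSON-RPC port
```

Packaged as a Helm chart together with the component images (explorer, stats dashboard, and so on), a whole network could then be stood up with a single `helm install my-network ./chart`.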

COVID-19 is teaching investors a thing or two about how important an opportunity “edtech” is

In spite of the billions invested across the world in the latest and greatest innovations, technology hasn’t been able to stop or impact the spread of COVID-19 on any notable scale, something embarrassing to us all. As a result, investors broadly have decided to support the industries and tech where significantly less funding had been placed historically. As an example, we at Perlego have received five times more approaches from new venture capitalists (VCs) and angels since the lockdown. I believe this is for one of two reasons: they either want to help a future society or they’ve seen failures in the likes of medicine, education and ecotech at this time and see these as the new fintechs in the years to come. Regardless of the reason, what is essential is to place more focus on the sectors that were previously seen as poor relations to their shiny counterparts. Investment, growth and the opportunity to succeed must be further developed; such is the necessity for innovation on a global scale. It’s sad that it has taken a global crisis to trigger this thinking.

Optimizing MDM With Agile Data Governance

The embarrassing truth is that most organizations cannot answer these seemingly simple questions, at least not without serious effort. In addition, many organizations have been reporting erroneous customer figures as different silos and lines of business fail to work cohesively to manage their master data assets. The annual cost and impact of data quality issues rooted in ungoverned data, with little or no formal accountability around critical enterprise data, have propelled the need for many organizations to fix their MDM problem. It’s evident that the need for ‘trusted data’ continues to appear in nearly all data initiatives. However, most organizations are still struggling with their MDM rollout simply because it’s approached through a single lens. It’s one thing to fix the problem by mastering the formerly bad data; it’s another thing to make the solution sustainable by treating the root problem of disparate common data. The value of a ‘stewardship culture’ around data assets cannot be overemphasized. For MDM to be sustainable and rightfully implemented, it must be positioned in a governed environment where stewardship of the mastered data and an associated culture of data governance are in place.

Unify: Architecting the Missing Link in Data Management

No matter what label or acronym the industry applies, it comes down to a simple truth: you need a dose of reality before tackling data management. “All recognize the fact that it is impossible for organizations to physically centralize all their data. Instead, data virtualization lets organizations provide one ‘virtual’ place to go for data consumers to access data and IT to provide it,” says Eve. Next, companies need a strategy to tool up for “next-generation data management.” “Gartner’s advice is to consolidate data management tooling in vendor suites such as TIBCO Unify that combine metadata management, master data management, reference data management, data catalog, data governance, and data virtualization within one integrated solution,” says Eve. Data management should not be an IT problem alone. Businesses can chip in by increasing their citizen data engineering pool and offering business domain advice. “Work together to assess your needs and skills. Then be smart about maximizing the value each side can contribute, for example, IT using TIBCO Data Virtualization to provision hundreds of reusable data services that the business can quickly mix and match to address their changing needs,” says Eve.

ZLoader Banking Malware Resurfaces

ZLoader has an element that downloads and runs the banking malware component from its command-and-control server, researchers at Proofpoint say. ZLoader spread in the wild from June 2016 to February 2018, with a group called TA511 - aka MAN1 or Moskalvzapoe - being one of the top threat actors spreading the malware, the report adds. The ZLoader malware uses webinjects to steal credentials, passwords and cookies stored in web browsers, as well as other sensitive information from customers of banks and financial institutions, according to Proofpoint. The malware then lets hackers connect to the infected system through a virtual network computing client, so they can make fraudulent transactions from the user's device. The researchers note that the latest variant seemed to be missing some of the advanced features of the original ZLoader malware, such as code obfuscation and string encryption, among other features. "Hence, the new malware does not appear to be a continuation of the 2018 strain, but likely a fork of an earlier version," the researchers state.

Opening the doors to greater data value with data catalogue

If data isn’t consistent, comprehensive, and accurate, digital transformation efforts may fall short of objectives in a wide range of areas, such as: Laying the foundations for advanced analytics. Data scientists often spend 80% of their time searching for data, and just 20% on actual AI/ML and modeling. A data catalogue reverses the equation by providing quick data discoverability and access to relevant information. That lets data scientists and business analysts use trusted data to deliver the insights needed for data-driven decision-making. Developing a 360-degree customer experience. Because customer data exists in so many corners of the enterprise, it’s essential for organisations to have a holistic 360-degree view across all sources if they are to truly understand customers as individuals. By identifying all key sources of customer data, a data catalogue provides the foundation for more personalised engagement and improved customer experience. Supporting and accelerating smooth cloud data migration. 

Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie

Daily Tech Digest - May 30, 2020

Tips on Digital Adoption and Transformation from Tesla

Pushback by people resistant to change regardless of the potential value of the ideas or technology can stall adoption, Davies said. The friction can come from a refusal to part ways with familiar, comfortable methods. Such reluctance may be reinforced by a lack of awareness of the features and functionality found in the innovations being introduced, she said. To get teams on board with change, Davies said it is essential to show them the new technology is better than what they already use. “This was Tesla’s strategy when introducing a huge transformation with electric cars,” she said. The same strategy used to get consumers to consider migrating from internal combustion engines to electric vehicles can be applied, Davies said. Common presumptions about electric cars painted them as slow, ugly, and limited on range. When Tesla unveiled the first Roadster, the company promoted its visual aesthetics and performance, she said, capable of accelerating from 0-60 mph in 3.7 seconds and with an operational range of 245 miles on a full charge. “This car was a critical step in the digital transformation to electric cars,” Davies said.

Please, Keep Artificial Intelligence From Becoming Another Out-Of-Touch Bureaucracy

AI inherently operates just like bureaucracies, he adds. “The essence of bureaucracy is to favor rules and procedures over human judgment. And if human judgment is not kept in the loop, AI will bring a terrifying form of new bureaucracy — I call it ‘algocracy,’ where AI will take more and more critical decisions by the rules outside of any human control.” The results of bureaucratic algocracy could be devastating — affecting university admissions, aircraft performance, or supply chain issues when a crisis hits. That’s why there needs to be humans providing input into AI decisions. It should be added that it takes humans to design forward-thinking processes and companies — tools such as AI are only that — tools that will help make things happen. As with many technology innovations, it often gets assumed that by dropping AI into a moribund, calcified organisation, insights and profitability will magically clear things up. AI should serve as “augmented” intelligence to support human decision-making — not the other way around.

Walmart Employees Are Out to Show Its Anti-Theft AI Doesn't Work

In an effort to refute the claims made in the Business Insider piece, the Concerned Home Office Associates created a video, which purports to show Everseen’s technology failing to flag items not being scanned in three different Walmart stores. Set to cheery elevator music, it begins with a person using self-checkout to buy two jumbo packages of Reese’s White Peanut Butter Cups. Because they’re stacked on top of each other, only one is scanned, but both are successfully placed in the bagging area without issue. The same person then grabs two gallons of milk by their handles, and moves them across the scanner with one hand. Only one is rung up, but both are put in the bagging area. They then put their own cell phone on top of the machine, and an alert pops up saying they need to wait for assistance—a false positive. “Everseen finally alerts! But does so mistakenly. Oops again,” a caption reads. The filmmaker repeats the same process at two more stores, where they fail to scan a heart-shaped Valentine’s Day chocolate box with a puppy on the front and a Philips Sonicare electric toothbrush. At the end, a caption explains that Everseen failed to stop more than $100 of would-be theft.

How AI is transforming recruitment and hiring

Traditionally the recruiter is the person who gathers resumes from various sources. This is a time-consuming process, and is also prone to human error. There are many tools available today that can match your job description to resumes on recruitment portals and help you build a database of the most relevant candidates. These AI-powered tools use pattern matching algorithms to make sure the resume is a close match to the job description. For example, if the recruiter is looking for a marketing professional with 3-5 years of experience with a salary of Rs 12,00,000 per annum who stays within 10 km of their office, the standard search may throw up 30 candidates. However, using AI, the software is capable of suggesting that if the experience desired is increased to 6 years and the salary to Rs 15,00,000, there would be 50 candidates ideal for the profile. This data is useful to recruiters who need to understand where to get the maximum best-suited candidates from. This results in the elimination of manual efforts and a significant reduction in the number of unsuitable candidates, thus improving the process.
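To illustrate the kind of matching involved, here is a deliberately tiny sketch in plain Python. It scores resumes against a job description using simple word-set overlap (Jaccard similarity) as an illustrative stand-in for whatever proprietary matching a real tool performs; the texts and names are made up:

```python
# Toy resume-to-job-description matcher: Jaccard similarity over
# word sets. An illustrative stand-in for the pattern matching such
# tools perform, not a production algorithm.

def tokenize(text):
    return set(text.lower().split())

def match_score(job_description, resume):
    jd, cv = tokenize(job_description), tokenize(resume)
    return len(jd & cv) / len(jd | cv)  # similarity in [0, 1]

jd = "marketing professional with experience in digital campaigns"
resumes = {
    "A": "marketing professional experienced in digital campaigns and SEO",
    "B": "software engineer with experience in distributed systems",
}
ranked = sorted(resumes, key=lambda k: match_score(jd, resumes[k]), reverse=True)
print(ranked[0])  # prints "A": the marketing resume overlaps most with the JD
```

A real system would add weighting for experience, salary, and location filters on top of the text-similarity score, which is how it can suggest how relaxing a constraint changes the candidate pool.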

Singapore researchers tapping quantum cryptography to enhance network encryption

ST Engineering's president of cybersecurity systems group, Goh Eng Choon, said: "The threat landscape is evolving very rapidly and we must be prepared for challenges to come in the post-quantum computing era. While QKD technology can be used to secure digital communications, it can also be used to mitigate future quantum computers being used to exploit and maliciously target weak links and disrupt the global encryption ecosystem. "This research into quantum cryptography and the co-development of the industry's first solution will allow us to explore the potential of this technology, further strengthen our arsenal of advanced cybersecurity solutions, and gain a foothold in the QKD market," Goh said. NUS currently is working with nanoelectronics institute companies to jointly develop new chip-based quantum crypto devices, which can be applied to the new MDI-QKD technology and broader quantum cryptography technology due to their smaller device footprint and lower cost. NUS' assistant professor Charles Lim Ci Wen, who leads the joint project with ST Engineering, said: "As quantum computing becomes more prevalent worldwide, information security threats will also become more advanced.

Leaders discuss challenges, strategies for women in IT

"Things are only going to change over time, the more we continue to support and promote diversity, diverse teams and allowing different perspectives to prevail and not always sticking with the same old thing that works," Mayshar said. Constellation Research's Miller agreed, explaining that the push for women in the tech industry is not just to get them in there, but also to keep them in there and create visibility for the next generations. "I think that more girls are going to see women in leadership positions in technology companies -- they're going to see more women founders, they're going to see more women CEOs," Miller said. As for the new reality that is remote work in most businesses due to the COVID-19 pandemic, Ray-Pope said it could actually benefit women in IT. "There are ways I am also maximizing my time because of being home," she said. "I do think that we in corporate are in a unique position. I do think, in some ways, I have seen the playing field leveled … [working from home] is opening doors for women who choose to take advantage of it. There is no backroom networking." Juggling between home and work lifestyles isn't a new challenge for women, Miller said.

Why authentic informal leaders are key to an organization’s emotional health

AILs have excellent emotional sensing and energizing capabilities. They naturally detect feelings at play in any organizational challenge, capture and create positive emotions, and know how to influence and encourage people to engage in important behaviors. Management can mobilize them as a powerful resource to learn and identify how to respond in moments of crisis. When appropriate, they can also counterbalance negative feelings. ... AILs can be engaged and activated in a variety of ways — many of them virtual. For example, emotionally intelligent AILs may be asked to launch an effort to understand and find ways to manage the organization’s fear and bolster individual confidence with respect to COVID-19. They can launch virtual small communities that meet regularly to discuss how they are motivating their teams. They can organize a Facebook group or another group to discuss topics informally. Rather than mandate that AILs act in a specific way, formal leaders should ask AILs how best to engage and activate them. 

Secure Together: is your organisation prepared for the end of lockdown?

Many have found the sudden shift to remote working so smooth that they are happy to make it a permanent move. A Gartner study revealed that 41% of employees want to continue working remotely some or all of the time after lockdown – up from 30% before the pandemic. However, while there are clearly benefits to remote working, things won’t be the same when offices reopen and there will be new challenges to address. For a start, working from home in a post-lockdown scenario might feel even more isolating, as you and your colleagues are no longer bound by being ‘in it together’. Those who return to the office are likely to resume normal work practices and may not be able to make the time to socialise with remote workers. Likewise, technical difficulties will probably take longer to resolve, as the IT team will once again be prioritising the systems and employees on the premises. These are issues that organisations and their employees should consider carefully as we ease our way out of lockdown.

Debunking The Myth That Greater Compliance Makes IT More Secure

Excelling at compliance doesn't protect any business from being hacked, yet pursuing a continuous risk management strategy helps. With a few exceptions (such as spearphishing), cyberattacks are, by nature, brutally opportunistic and random. They are driven to disrupt operations at best and steal funds, records, and privileged access credentials at worst. Conversely, the most important compliance events of all, audits, are planned for, often months in advance. Governance, Risk, and Compliance (GRC) teams go to Herculean efforts to meet and exceed audit prep timelines, working evenings and weekends. ... The truth is organizations are attempting to rationalize the high costs of compliance by looking for ways GRC spend can also improve cybersecurity. This is a dangerous assumption, as Marriott's third breach indicates. Marriott is an excellently managed business and sets standards in compliance. Unfortunately, that hasn't thwarted the three breaches it has experienced. Why are organizations assuming GRC spending will improve cybersecurity? It's because both areas share a common series of pains that require different solutions.

The Android hardware truth Google won't tell you

Plain and simple, buying an Android tablet is setting yourself up for disappointment — when it comes to both performance and capability and when it comes to the critical areas of privacy, security, and ongoing software upkeep. So when people ask me which Android tablet they should buy, you know what I tell 'em nowadays? They shouldn't buy one at all. If they want a Googley, Android-connected experience in a large-screen form, they should consider a decent convertible Chromebook instead. The exception — and Chrome OS's remaining weakness — is in the area of super-affordable, small-slate tablets. You can get a crappy Amazon-made Fire tablet for 50 bucks! And Chromebooks have yet to come around to address that demand. So if you're looking for a dirt cheap video screen or, say, something for a child to use, the low-end Android tablets might still be your only real option. When it comes to productivity and actual work-friendly devices, though — situations where the computing experience itself matters and where having an optimally secure, privacy-conscious, and performance-optimized environment is important — the common advice out there is increasingly misguided.

Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - May 29, 2020

Cases dealt with by AI courts rely heavily on blockchain evidence. For the uninitiated, blockchain is literally a chain of digital blocks. It is the system of storing digital information (the block) in a public database (the chain). Blockchain preserves information about transactions like the date, time and purchase amount etc. A classic illustration would be a purchase on Amazon. It contains a series of transactions which are recorded and kept on a digital platform. Each ‘block’ added to the ‘chain’ comes into the public domain, where it remains preserved. The critical question is, is blockchain tamper-proof? Is alteration of its data impossible by human intervention? Is blockchain data immutable and time-stamped, and can it safely be used as an auditable trail? The judges in China think so. China’s Supreme People’s Court has put matters to rest. It has ruled that evidence authenticated with blockchain technology is binding in legal disputes. It ruled, "...internet courts shall recognize digital data that are submitted as evidence if relevant parties collected and stored these data via blockchain with digital signatures, reliable timestamps and hash value verification or via a digital deposition platform, and can prove the authenticity of such technology used."
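The tamper-evidence the court relies on comes from chaining hashes: each block's hash covers its data, its timestamp, and the previous block's hash, so altering any stored record invalidates every block after it. A minimal sketch of that mechanism in plain Python (illustrative only, not any court's actual evidence platform):

```python
import hashlib
import json

def make_block(data, prev_hash, timestamp):
    # The block's hash covers its data, its timestamp, and the previous
    # block's hash -- this linkage is what makes the chain tamper-evident.
    payload = json.dumps({"data": data, "ts": timestamp, "prev": prev_hash},
                         sort_keys=True)
    return {"data": data, "ts": timestamp, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    # Recompute every hash and check each link to its predecessor.
    for i, block in enumerate(chain):
        payload = json.dumps({"data": block["data"], "ts": block["ts"],
                              "prev": block["prev"]}, sort_keys=True)
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("evidence A", "0" * 64, 1590700000)]
chain.append(make_block("evidence B", chain[-1]["hash"], 1590700060))
assert verify(chain)             # untampered chain checks out
chain[0]["data"] = "evidence X"  # alter a stored record...
assert not verify(chain)         # ...and verification now fails
```

This is why the court's conditions (digital signatures, reliable timestamps, hash value verification) matter: the hashes only prove that records have not changed since they were written, so trust still depends on how they were collected and signed in the first place.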

GitHub Supply Chain Attack Uses Octopus Scanner Malware

When Octopus Scanner lands on a machine, it looks for signs indicating the NetBeans IDE is in use on a developer's system, GitHub security researcher Alvaro Muñoz explains in a blog post on their findings. If it doesn't find anything, the malware takes no action. If it does, it ensures that every time a project is built, any resulting JAR files are infected with a dropper. When executed, the payload ensures persistence and spreads a remote access Trojan (RAT), which connects to C2 servers. The malware continues to spread by infecting NetBeans projects, or JAR files. This way, it backdoors healthy projects so when developers release code to the public, it contains malware. The goal of Octopus Scanner is to insert backdoors into artifacts built by NetBeans so the attacker can use these resources as part of the C2 server, Waisman says. "When the end user deploys the workload, they have given the attacker access via the backdoor to their resources for use as part of a command-and-control server," he adds. 
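Because the dropper changes the bytes of the built JAR, one simple defense against this class of supply chain tampering is recording the checksum of a known-good artifact and re-checking it before release or deployment. A minimal sketch in plain Python (the byte strings stand in for real build artifacts):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    # Digest of an artifact's raw bytes; any modification changes it.
    return hashlib.sha256(data).hexdigest()

# Record the digest of a known-good artifact at build time...
good_jar = b"\xca\xfe\xba\xbe known-good build output"
recorded = sha256_of(good_jar)

# ...and re-check before releasing. A dropper injected into the JAR
# changes its bytes, so the digest no longer matches the record.
tampered_jar = good_jar + b" injected dropper"
print(sha256_of(good_jar) == recorded)      # True
print(sha256_of(tampered_jar) == recorded)  # False
```

Checksums alone don't help if the build environment itself is compromised before the digest is recorded, which is exactly what makes build-time infections like Octopus Scanner hard to catch.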

How the coronavirus pandemic is affecting developers' mental health

Working from home has always included controversy. While two-thirds of employees prefer to do so--more than a third would choose this perk over a pay raise and another 37% would take a 10% pay cut to stay home--management has traditionally been less than thrilled with the idea. It's often been viewed by executives as a way for workers to underperform in their roles or fly under the radar. As a result, given that many organizations now have no choice but to promote work-from-home capabilities, these are being doled out with increased expectations and heftier accountability requirements. The economic downturn and threat of looming layoffs don't help the situation. I can say I've put in more hours than ever before proving my value in my role to ensure that the systems and services for which I am responsible stay up and running. ... Without commutes it can seem like there are more hours in the day, but at the same time there aren't clear breaks between home and work time, nor the regular breaks for mentally recharging like going out for coffee or even just visiting the snack area and talking to coworkers.

Create Deepfakes in 5 Minutes with First Order Model Method

The basis of deepfakes, or image animation in general, is to combine the appearance extracted from a source image with motion patterns derived from a driving video. For these purposes deepfakes use deep learning, which is where their name comes from (deep learning + fake). To be more precise, they are created using a combination of autoencoders and GANs. An autoencoder is a simple neural network that utilizes unsupervised learning (or self-supervised learning, to be more accurate). Autoencoders are so named because they automatically encode information, and they are usually used for dimensionality reduction. An autoencoder consists of three parts: encoder, code, and decoder. The encoder processes the input, in our case an input video frame, and encodes it. This means it transforms the information gathered into some lower-dimensional latent space – the code. This latent representation contains information about key features, such as facial expression and body posture, in the video frame. In layman's terms, it captures what the face is doing: whether it is smiling, blinking, and so on. 
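The encoder-code-decoder idea can be shown with a deliberately tiny, hand-crafted example in plain Python. A real autoencoder learns the encode and decode mappings as neural networks; here they are hard-coded so the shape of the pipeline is visible:

```python
# Hand-crafted stand-in for an autoencoder: 2-D points on the line
# y = 2x are encoded into a single latent number (the "code") and
# then decoded back. A real autoencoder learns these two mappings
# from data instead of fixing them by hand.

def encode(point):
    x, y = point
    # Project onto the direction (1, 2): the 2-D input becomes a 1-D code.
    return (x + 2 * y) / 5

def decode(code):
    # Reconstruct a 2-D point from the 1-D code.
    return (code, 2 * code)

frame = (1.0, 2.0)      # a toy "video frame" lying on y = 2x
latent = encode(frame)  # lower-dimensional latent representation
print(decode(latent))   # prints (1.0, 2.0): reconstruction is exact here
```

Training an autoencoder amounts to finding encode/decode functions that make the reconstruction error small for real data, and the latent code is then a compact description of the input, analogous to the facial-feature code described above.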

Mobile security forces difficult questions

When it comes to security, compliance and what IT or Security have the right to do, neither is demonstrably better, unless you're willing to put rights and restrictions in writing and — this is the hard part — enforce them. The biggest worry for either mode involves remote wipe. When a device is suspected to have been stolen, remote wipe needs to happen, to reduce the chance of enterprise data being stolen or an attack being waged. That question becomes difficult when the device is owned by the employee. Does the enterprise have the right to wipe it and permanently delete any personal data, images, messages, videos, etc.? We'll get back to BYOD deletions in a moment. But for corporate devices, the deletion would seem to be much easier. And yet, it's not. Many companies encourage employees to not use the corporate mobile device for anything other than work, but few put it in writing and stress that the company may have to obliterate everything on the phone in the case of a perceived security emergency — and insist that it be signed before the phone is distributed.

Digital Distancing with Microsegmentation

Microsegmentation improves data center security by controlling the network traffic into and out of a network connection. Ultimately, the goal of microsegmentation is to implement Zero Trust. Done properly, microsegmentation is effectively a whitelist for network traffic. This means that systems on any given network can strictly communicate with the specific systems they need to communicate with, in the manner they are supposed to communicate, and nothing else. With connections and communications so regimented, microsegmentation is among the best protections we have today against lateral compromise. This allows microsegmentation administrators to protect whatever is on the other end of that network connection from whatever else is on the network. It also allows everything else on the network to receive a basic level of protection from whatever might be on the other end of that network connection. This is a huge change from the "eggshell computing" model in which all defenses are concentrated at the perimeter (the eggshell) but everything behind that edge is wide open (the soft insides of the egg). 
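In Kubernetes environments, for instance, this whitelist model is commonly expressed as a default-deny network policy plus explicit allow rules. The sketch below is illustrative; the namespace-less names, labels, and port are assumptions, not taken from any specific deployment:

```yaml
# Default-deny: no pod in this namespace accepts any ingress traffic...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes: [Ingress]
---
# ...except traffic explicitly whitelisted: here, only the web tier
# may reach the database pods, and only on the database port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
spec:
  podSelector:
    matchLabels: {app: db}
  ingress:
  - from:
    - podSelector:
        matchLabels: {app: web}
    ports:
    - port: 5432
```

Any connection not matched by an allow rule is dropped, which is exactly the "strictly communicate with the specific systems they need to" behavior described above.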

Cisco Throws Its Weight Behind SASE

SASE represents an opportunity to put more of Cisco’s networking and security services in the cloud, said Jeff Reed, SVP of product for Cisco’s security business group. Cisco’s SASE offering will tie together elements of its networking, security, and zero-trust product lines. This includes elements of its Viptela and Meraki SD-WAN platforms to address SASE’s WAN and routing requirements. Meanwhile, for security, the vendor will lean on Cisco Umbrella for secure web gateway, domain name system (DNS), firewall, and cloud access security broker (CASB) functionality. Finally, Cisco will integrate core elements of its zero-trust networking portfolio — which includes Duo, SD-Access, and AnyConnect — to verify identity and enhance the overall security of the offering. “We had this opportunity … to basically tie all the strength we have on the network side into the abilities and capabilities we have on the security side,” Reed said. But Reed emphasizes that Cisco won’t be “lifting and shifting” existing constructs and running them in the cloud. Cisco is fully embracing the cloud-native underpinnings of SASE, he said. “We’re doing cloud native, so we’re not just lifting and shifting our virtual firewall in the cloud.”

Compare a product vs. project mindset for software development

Enterprises have started to recognize the danger of a project mindset, namely, that everyone focuses less on the product. "A perfect project management system can complete every task ... in a vacuum, with amazing results -- and still fail when it comes time to go to market," said Alexander M. Kehoe, operations director at Caveni Digital Solutions, a web design consultancy. Apple has applied both project and product mindsets. Apple's iPhone innovation enabled it to grow into one of the largest companies in the world. However, critics accuse Apple of releasing a nearly carbon-copy iPhone each year. According to these critics, product quality for these phones has stagnated, as Apple finishes projects with little or no consideration on the product side. Because of this reliance on project-oriented thinking, Kehoe said, the next major mobile phone innovation might not come from Apple. If another company takes the lead in mobile phone innovation, Apple might see its market dominance evaporate overnight, he said.

Report: Debugging Efforts Cost Companies $61B Annually

The report also notes software engineers spend an average of 13 hours to fix a single software failure. According to the report, 41% identified reproducing a bug as the biggest barrier to finding and fixing bugs faster, followed by writing tests (23%) and actually fixing the bug (23%). Well over half (56%) said they could release software one to two days faster if reproducing failures were not an issue. Just over a quarter of developer time (26%) is spent reproducing and fixing failing tests. On the plus side, 88% of respondents said their organizations have adopted continuous integration (CI) practices, with more than 50% of businesses reporting they can deploy new code changes and updates at least daily. Over a third (35%) said they can make hourly deployments. Undo CEO Barry Morris said the report makes it clear organizations need to be able to record software execution to reduce the amount of time it takes to find bugs. Unfortunately, even then finding a bug is still a labor-intensive process that can involve analyzing millions of lines of code.

Using Cloud AI for Sentiment Analysis

Natural Language Toolkit (NLTK) is a powerful Python library for natural language processing (NLP) and machine learning. Popular cloud services offer some alternative NLP tools that use the same underlying concepts as NLTK. ... If you've followed the NLP sentiment analysis articles we started in Introducing NLTK for Natural Language Processing, you've seen one established approach. The following overviews will show you what the interface and response look like for sentiment analysis on these cloud services. In many cases it's very similar to NLTK, just using the horsepower of someone else's computers. Amazon Web Services (AWS) provides the Amazon Comprehend NLP service, which includes a range of features analogous to some of what you’ll find in NLTK. Similar to NLTK’s pos_tag, the AWS service can identify parts of speech (POS) and tag entities such as proper names and places. It can detect 100 languages in unstructured text, and includes text summarization capabilities to identify and extract key phrases that contribute to the overall meaning of a given piece of text.
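
To make the comparison concrete, a Comprehend sentiment response has roughly the shape sketched below (the values are invented for illustration); pulling the winning label and its score out of the response is a one-liner:

```python
import json

# A made-up response in the general shape Amazon Comprehend's
# DetectSentiment API returns (values are illustrative only).
sample = json.loads("""
{
  "Sentiment": "POSITIVE",
  "SentimentScore": {"Positive": 0.95, "Negative": 0.01,
                     "Neutral": 0.03, "Mixed": 0.01}
}
""")

def top_sentiment(resp: dict):
    """Return the predicted label and its confidence score."""
    label = resp["Sentiment"]
    return label, resp["SentimentScore"][label.capitalize()]

label, score = top_sentiment(sample)
assert label == "POSITIVE"
```

The point of the cloud approach is that the heavy lifting happens server-side; your code only shapes a request and unpacks a small JSON response like this one.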

Quote for the day:

"If you're not prepared to be wrong, you'll never come up with anything original." -- Sir Ken Robinson

Daily Tech Digest - May 28, 2020

Analysis by researchers at cybersecurity company Tessian reveals that 52 percent of employees believe they can get away with riskier behaviour when working from home, such as sharing confidential files via email instead of more trusted mechanisms. ... In some cases, employees aren't purposefully ignoring security practices, but distractions while working from home such as childcare, room-mates and not having a desk set-up like they would at the office are having an impact on how people operate. Meanwhile, some employees say they're being forced to cut security corners because they're under pressure to get work done quickly. Half of those surveyed said they've had to find workarounds for security policies in order to efficiently do the work they're required to do – suggesting that in some cases, security policies are too much of a barrier for employees working from home to adapt to. However, by adopting workarounds employees could be putting their organisation at risk from cyber attacks, especially as hackers increasingly turn their attention to remote workers. "But, all it takes is one misdirected email, incorrectly stored data file, or weak password, before a business faces a severe data breach that results in the wrath of regulations and financial turmoil".

Google, Microsoft most spoofed brands in latest phishing attacks

In form-based phishing attacks, scammers leverage sites such as Google Docs and Microsoft Sway to trap victims into revealing their login credentials. The initial phishing email typically contains a link to one of these legitimate sites, which is why these attacks can be difficult to detect and prevent. Among the nearly 100,000 form-based attacks that Barracuda detected over the first four months of 2020, Google file sharing and storage sites were used in 65% of them. These attacks included such sites as storage.googleapis.com, docs.google.com, storage.cloud.google.com, and drive.google.com. Microsoft brands were spoofed in 13% of the attacks, exploiting such sites as onedrive.live.com, sway.office.com, and forms.office.com. Beyond Google and Microsoft, other sites spoofed in these attacks were sendgrid.net, mailchimp.com, and formcrafts.com. ... criminals try to spoof emails that seem to have been created automatically through file sharing sites such as Microsoft OneDrive. The emails contain links that take users to a legitimate site such as sway.office.com. But that site then leads the victim to a phishing page prompting for login credentials.
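
As a toy illustration of one detection signal (the host list is taken from the sites named above; the function name is hypothetical), a mail filter might flag links pointing at legitimate services commonly abused as lures, for closer inspection:

```python
from urllib.parse import urlparse

# Legitimate hosts the article reports being abused in form-based phishing.
ABUSED_HOSTS = {
    "storage.googleapis.com", "docs.google.com",
    "storage.cloud.google.com", "drive.google.com",
    "onedrive.live.com", "sway.office.com", "forms.office.com",
}

def links_on_abused_hosts(urls):
    """Return links hosted on commonly abused legitimate services.
    A hit is not proof of phishing -- only a signal worth inspecting,
    which is exactly why these attacks are hard to block outright."""
    return [u for u in urls if urlparse(u).hostname in ABUSED_HOSTS]

flagged = links_on_abused_hosts([
    "https://sway.office.com/abc123",
    "https://example.com/report.pdf",
])
assert flagged == ["https://sway.office.com/abc123"]
```

Because the flagged domains are genuinely legitimate, such a check can only prioritize messages for scrutiny; blocking them outright would break everyday file sharing.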

Four ways to reflect that help boost performance

On top of a mountain, a leader retreats to ask himself or herself a set of questions about life, stress, and sacrifice, capturing the answers in a beautifully bound notebook. The questions don’t vary much. Where are you going? How are you living your values? What gives you meaning, purpose, or fulfillment? Are all the components of your life managed as you need them to be managed: career, family, friends, finances, health, and spiritual growth? The power of this reflection comes from digging deep and being in touch with your core. It is very much an affair of the heart. With the insights from this exercise, you come back to your role renewed, focused on what matters to you and clearer about how you will lead this year. Although this kind of deep reflection is a useful process, it may not be enough to tackle the range of problems a business encounters in the course of a year because it focuses solely on the leader. In our experience working together and independently coaching leaders, we find that they and their teams benefit from four ways of more targeted reflection that help refocus and reframe challenges.

IT Staffing Guide

After taking the time to write out your job description and put it out there on as many job boards as possible, you can only hope and pray that the right candidate finds you. Meanwhile, your organization loses time and money while operating with less than full staff and taking time away from work to conduct interviews that may or may not lead to a successful hire. In the best-case scenario, you find someone great, and you are just out the original time and money. In the worst-case scenario, time drags on, and no one who is right for the position ever applies, or you hire someone, and it doesn’t work out, hopefully only once. A thriving, growing company just does not have time for this every time it needs to add to the team. In short, IT staffing agencies will save your company both time and money. IT staffing agencies take the time to get to know the needs of both the company and the potential employees, and match the two in both technical and cultural aspects.

Flutter: Reusable Widgets

Most of the time, we duplicate so many widgets just for a small change. What is the best way to get rid of this? Creating Reusable Widgets. It’s always good practice to use reusable widgets to maintain consistency throughout the app. When we are dealing with multiple projects, we don’t want to write the same code multiple times. That creates duplication and, in the end, if any issue comes up we are left with a mess. So, the best approach is to create a base widget and use it everywhere. You can modify it based on your requirements, and another advantage is that if any change comes, you make it in one place and it is reflected everywhere. ... Try to keep business logic out of UI widgets. All communication between the user and the UI should happen via events. Then, if you need to use the same widget in another project, you can do it quickly. ... Accessing data via callbacks is the best way to separate your view from your business logic (just like View and ViewModel).
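
The article's examples are Flutter/Dart; as a language-neutral sketch of the same idea (names are hypothetical), here is a base widget that takes its label and behavior from the caller, so the business logic stays outside the widget:

```python
# Illustrative stand-in for a reusable base widget: the caller injects
# the label and a callback, so the widget itself holds no business logic.
class BaseButton:
    def __init__(self, label, on_pressed):
        self.label = label
        self.on_pressed = on_pressed  # callback supplied by the caller

    def tap(self):
        """Simulate a user tap: forward the event to the callback."""
        self.on_pressed()

# The same widget is reused with different labels and behaviors:
events = []
save = BaseButton("Save", lambda: events.append("saved"))
delete = BaseButton("Delete", lambda: events.append("deleted"))
save.tap()
delete.tap()
assert events == ["saved", "deleted"]
```

In Flutter the equivalent pattern is a custom widget whose constructor takes a `VoidCallback`, exactly so that the view can be dropped into another project without carrying any business logic along.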

The mobile testing gotchas you need to know about

If you’re dealing with a native mobile application, you can find yourself in the wild west. It’s not so bad on iOS, where current OS support is available for devices several years old, but in the Android world, the majority of currently active devices are running versions four or five years old. This presents a huge challenge for testing. In my group, we’re lucky enough to only deliver on iPads, and we set a policy of only supporting the currently shipping version of iOS and one major release back. But if you are trying to be more inclusive or are stuck supporting the much more heterogeneous Android ecology, you have to do a lot of testing across multiple devices and OS versions. You can’t even get away with testing on a lowest common denominator release. Your dev team is probably conditionally taking advantage of new OS features, such as detecting which OS version the device is running and using more modern features when available. As a result, you have to test against pretty much every version of the OS you need to support.

Fujitsu delivers exascale supercomputer that you can soon buy

Fujitsu announced last November a partnership with Cray, an HPE company, to sell Cray-branded supercomputers with the custom processor used in Fugaku. Cray already has deployed four systems for early evaluation located at Stony Brook University, Oak Ridge National Laboratory, Los Alamos National Laboratory, and the University of Bristol in Britain. According to Cray, systems have been shipped to customers interested in early evaluation, and it is planning to officially launch the A64fx system featuring the Cray Programming Environment later this summer. Fugaku is remarkable in that it contains no GPUs but instead uses a custom-built Arm processor designed entirely for high-performance computing. The motherboard has no memory slots; the memory is on the CPU die. If you look at the Top500 list now and the proposed exaFLOP computers planned by the Department of Energy, they all use power-hungry GPUs. As a result, the Fugaku prototype topped the Green500 ranking last fall as the most energy-efficient supercomputer in the world. Nvidia’s new Ampere A100 GPU may best the A64fx in performance, but with its 400-watt power draw it will use a lot more power.

Use of cloud collaboration tools surges and so do attacks

The use rate of certain collaboration and videoconferencing tools has been particularly high. Cisco Webex usage has increased by 600%, Zoom by 350%, Microsoft Teams by 300% and Slack by 200%. Again, manufacturing and education ranked at the top. While this rise in the adoption of cloud services is understandable and, some would argue, a good thing for productivity in light of the forced work-from-home situation, it has also introduced security risks. McAfee's data shows that traffic from unmanaged devices to enterprise cloud accounts doubled. "There's no way to recover sensitive data from an unmanaged device, so this increased access could result in data loss events if security teams aren't controlling cloud access by device type." Attackers have taken notice of this rapid adoption of cloud services and are trying to exploit the situation. According to McAfee, the number of external threats targeting cloud services increased by 630% over the same period, with the greatest concentration on collaboration platforms.

Analytics critical to decisions about how to return to work

As offices begin to reopen amid the COVID-19 crisis, decisions will have to be made in order to limit the potential spread of the virus.
"There's a couple of things, and one is understanding your performance," Menninger said. "That's a key aspect of analytics -- understanding your current performance, extrapolating from that performance, planning and looking forward with that information -- and finding some patterns in the past that perhaps might be useful." Doing an internal analysis can also help an organization find ways to cut costs it may not have taken advantage of in the past. Trimming costs, meanwhile, is something many enterprises don't do when the economy is more stable and their profits more predictable, but economic uncertainty forces organizations to more closely examine their spending, said Mike Palmer, CEO of analytics startup Sigma Computing. "One thing to look at is how to optimize the business -- where do I have efficiencies that I can gain, how many do I have?" Palmer said. "There are so many questions that the average company doesn't effectively answer in good times because they don't focus on optimization."

Machine Learning in Java With Amazon Deep Java Library

Interest in machine learning has grown steadily over recent years. Specifically, enterprises now use machine learning for image recognition in a wide variety of use cases. There are applications in the automotive industry, healthcare, security, retail, automated product tracking in warehouses, farming and agriculture, food recognition and even real-time translation by pointing your phone’s camera. Thanks to machine learning and visual recognition, machines can detect cancer and COVID-19 in MRIs and CT scans. Today, many of these solutions are primarily developed in Python using open source and proprietary ML toolkits, each with their own APIs. Despite Java's popularity in enterprises, there aren’t any standards to develop machine learning applications in Java. ... One of these implementations is based on Deep Java Library (DJL), an open source library developed by Amazon to build machine learning in Java. DJL offers hooks to popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch by bundling requisite image processing routines, making it a flexible and simple choice for JSR-381 users.

Quote for the day:

"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind

Daily Tech Digest - May 27, 2020

Enterprises look to SASE to bolster security for remote workers

"Companies that were on the fence about whether to upgrade to SASE, they're falling over to the 'adopt now' side," says Zeus Kerravala, founder and principal analyst at ZK Research. "If I'm trying to move to a modernized application infrastructure, why am I still using a network architecture designed for client-server from 30 years ago? A lot of my apps are now in the cloud, I've got people working from everywhere. This transition would have happened with or without the pandemic, but the pandemic has accelerated it." While it's too early to tell if adoption spikes will continue after the pandemic abates, individual SASE vendors are reporting dramatic changes so far. Versa Networks, for example, saw remote user traffic increase by 800% to 900% since the pandemic hit. "Around March 22 is when we began to see these stats appear at this level," says Mike Wood, Versa Networks' CMO. Sanjay Uppal, senior vice president and general manager of the VeloCloud business unit at VMware, says that use of the company's SASE network has gone up five-fold since the pandemic hit.

In the communication space, UCaaS is probably the best-known term in cloud communications. When the as-a-service offering arrived, providing access to flexible communications in the cloud, UCaaS was one of the first ways that businesses saw the benefits of this new scalability. In the UCaaS Magic Quadrant, Gartner defines UCaaS as something that can combine the critical factors for communication into a single space. Unlike UC, which concentrates heavily on on-premise hardware, UCaaS is more focused on cloud-based services delivered over the internet. ... CPaaS, on the other hand, is very similar to UCaaS, but it delivers a different kind of experience. Just like UCaaS, your technology is delivered over the cloud, and often on a pay-monthly subscription service. However, while UCaaS delivers the entire communication platform to your team in one go, CPaaS allows business owners to develop the solution that suits them. For instance, you might add video collaboration, instant messaging, and voice calls to the technologies that you already use in your landscape. This is possible through the use of sample code, REST APIs, developer forums and in-depth documentation. Some companies even offer their own software development kits specifically for CPaaS use.

 “The move to widespread remote working has required many industries to adopt new cloud services in order to maintain staff communication and collaboration during such a challenging time,” said Nigel Hawthorn, data privacy expert for cloud security at McAfee. “However, it is important to recognise the increased threat from cyber criminals who see opportunity in cloud services that are not managed securely. “Cloud and data security should be absolutely front and centre in informing any enterprise’s cyber security approach – even more so when they are increasingly reliant on the cloud. Without ascertaining where sensitive data resides or how it is used and shared, it is simply impossible for organisations to have an accurate picture of their security posture and where any vulnerabilities may be.” Hawthorn said it was crucial for organisations to recognise their role within the shared responsibility model, making everyone accountable for cyber security, from enterprise IT teams, to managed service providers accessing their networks, down to individual employees.

Rebuilding our broken economies starts with market-level collaboration
Over the course of its history, the IT industry has pursued a relentless march to optimise the affairs of individual firms, often creating massive inefficiencies and standing in the way of progress for industries as a whole. But in their defence, software vendors have only responded to how firms within markets operate, providing solutions that fit their customers’ fear of sharing valuable data. There is an unspoken invisible line at the boundary of the firm and the market in which it operates that, until now, enterprise software has rarely been able to cross. Gaining market-level optimisation has been unthinkable without also ceding unpalatable levels of control and power to a vendor. So, even when an opportunity to pursue amazing new efficiencies through pooling the operations of an entire market into a centralised shared service arises, it’s extremely hard to justify taking the plunge.

Firstly, be aware that working from home represents much more than a change of location. It involves a profound shift in mindset and behaviour. With teams dispersed, we can no longer just turn to the side to check our thinking with a colleague. Instead, we make more decisions in isolation, and this can make us more vulnerable. We are also becoming more used to interacting with certain contacts only via email, which may raise the risk of impersonation and identity theft. In addition, the crisis itself is affecting the way we think. During times of stress and upheaval, humans tend to respond more instinctively and less rationally. Over the past few weeks, many of us have been forced to make instant decisions amid constant change. Such fast thinking has its place, but it can stop us from considering certain situations carefully and rationally and choosing the best way ahead. Finally, the threat of potential hackers is adding yet another source of stress.

"Microsoft was founded on the principle that software was intellectual property," Sinofsky says, making distinctions between the various approaches to software and hardware adopted by Microsoft, IBM, Google, and Apple. He points to the the Altair BASIC interpreter, the first product from Bill Gates and fellow Microsoft co-founder Paul Allen, which they created in the 1970s for hobbyists to program in BASIC on bare metal. Incidentally, Microsoft open-sourced the 1983 GW-BASIC interpreter last week as a historical software artifact. "Times were different when Microsoft started," Sinofsky writes. "There was no network distribution. In fact it cost money (COGS) to distribute software," he said, referring to the additional cost of distributing software compared with the way Google distributes its ad-backed software in the cloud, how Apple ties its software to hardware, and how IBM coupled its software with consultancy fees.

F-Secure’s research teams examined multiple devices, including, but not limited to, the Huawei Mate 9 Pro, the Samsung Galaxy S9 and the Xiaomi Mi 9. They found that the exploitation processes for Android vulnerabilities and configuration varied from device to device, which is important because it implies that devices sold globally offer different levels of security to users located in different countries. More concerningly, the level of security a user receives ultimately depends on the way the supplier configures the device – so two people in different countries can buy the same basic device, but one will be substantially more insecure than the other. “Devices that share the same brand are assumed to run the same, irrespective of where you are in the world,” said James Loureiro, UK research director at F-Secure Consulting. “However, the customisation done by third-party vendors such as Samsung, Huawei and Xiaomi can leave these devices with significantly poor security, dependent on what region a device is set up in or the SIM card inside of it.”

Technically applying security with Spring Security in Spring applications is simple. You already implement Spring applications so you know that the framework's philosophy starts with the management of the Spring context. You define beans in the Spring context to allow the framework to manage them based on configurations you specify. And let me refer only to using annotations to make these configurations and leave behind the old-fashioned XML configuration style! You can use annotations to instruct Spring what to do: expose endpoints, wrap methods in transactions, intercept methods in aspects, and so on. Also, you'd like to apply security configurations. This is where Spring Security comes into action. What you want is to use annotations, beans, and in general Spring-fashioned configuration style to define your application-level security. If you think of a Spring application, the behavior that you need to protect is defined by methods.

While smart cities can offer unprecedented levels of convenience to improve our everyday lives they also rely on vast networks of data, including personal customer information to predict our preferences. This has led to concerns around the high levels of data used and stored by smart systems, and the security provided to our digital identity. We know that existing personal and unique identifiers, such as passwords and PINs are no longer secure enough to protect our systems, and this is even more important in hyper-connected cities as, once a city becomes ‘smart’ the inter-connected networks widen, and the potential for cyberattacks or data breaches grows. So as this trend continues, how can we develop smart cities that are both convenient and secure? To resolve this, providers of smart city networks need to establish a chain of trust in their technology. This is a process common in cybersecurity, where each component in a network is validated by a secure root. In wide connected networks, this is vital to protect sensitive personal or business data and ensure consumer trust in the whole system. Therefore, a biometric digital identity should sit at the root of that chain of trust in smart city networks.

There is nothing wrong with monolithic apps in general if the different business functions they support are closely related to each other and they all need to be called in the same transactional context. These different functions also should have the same lifecycle in terms of enhancements and production deployments. But if an application or system needs to support business functions that are not closely related to each other, have different lifecycles of changes, or have different performance and scalability needs, then monolithic applications become a challenge. Application development and support start becoming overhead and a burden when the business needs change at different paces or in different parts of the system. Having a single app responsible for multiple business functions means that anytime we need to deploy enhancements or a new version of a specific function, we must shut down the whole application, apply the new feature, and restart the application.

Quote for the day:

"Each day you are leading by example. Whether you realize it or not or whether it's positive or negative, you are influencing those around you." -- Rob Liano

Daily Tech Digest - May 26, 2020

Real Time Matters in Endpoint Protection

And the problem is pervasive. According to a report from IDC, 70% of all successful network breaches start on endpoint devices. The astonishing number of exploitable operating system and application vulnerabilities makes endpoints an irresistible target for cybercriminals. They are not just desirable because of the resources residing on those devices, but also because they are an effective entryway for taking down an entire network. While most CISOs agree that prevention is important, they also understand that 100% effectiveness over time is simply not realistic. In even the most conscientious security hygiene practice, patching occurs in batches rather than as soon as a new patch is released. Security updates often trail behind threat outbreaks, especially those from malware campaigns as opposed to variants of existing threats. And there will always be that one person who can’t resist clicking on a malicious email attachment. Rather than consigning themselves to defeat, however, security teams need to adjust their security paradigm. When an organization begins to operate under the assumption that every endpoint device may already be compromised, defense strategies become clearer, and things like zero trust network access and real time detection and defusing of suspicious processes become table stakes.

The Problem with Artificial Intelligence in Security

There is a lot of promise for machine learning to augment tasks that security teams must undertake — as long as the need for both data and subject matter experts are acknowledged. Rather than talking about "AI solving a skill shortage," we should be thinking of AI as enhancing or assisting with the activities that people are already performing. So, how can CISOs best take advantage of the latest advances in machine learning, as its usage in security tooling increases, without being taken in by the hype? The key is to come with a very critical eye. Consider in detail what type of impact you want to have by employing ML and where in your overall security process you want this to be. Do you want to find "more bad" or do you want to help prevent user error or one of the other many possible applications? This choice will point you toward different solutions. You should ensure that the trade-offs of any ML algorithm employed in these solutions are abundantly clear to you, which is possible without needing to understand the finer points of the math under the hood.

Strategy never comes into existence fully formed. Today, for example, we know that part of Ikea’s strategy is to produce low-cost furniture for growing families. We also know that, behind the scenes, Ikea innovates with its products and supply chain. Once upon a time, the founder of Ikea did not sit at his kitchen table to create this strategy. And he absolutely did not use a Five Forces template or a business-model canvas. What happened was that, once the business had started and as time passed, events shaped Ikea and, of course, Ikea shaped events. ... Strategy patterns form a bridge between expert strategists, those who have walked the walk, and those who are less experienced. They accelerate the production of new strategies, reduce the number of arguments that arise from uncertainty, and help groups to align on their next actions. By using patterns, those less experienced can benefit from knowledge they haven't had time to build on their own. At the same time, patterns give experienced strategists a rubric that lets them teach strategy.

Blazor Finally Complete as WebAssembly Joins Server-Side Component

Blazor, part of the ASP.NET development platform for web apps, is an open source and cross-platform web UI framework for building single-page apps using .NET and C# instead of JavaScript, the traditional nearly ubiquitous go-to programming language for the web. As Daniel Roth, principal program manager, ASP.NET, said in an announcement post today, Blazor components can be hosted in different ways, server-side with Blazor Server and now client-side with Blazor WebAssembly. "In a Blazor Server app, the components run on the server using .NET Core. All UI interactions and updates are handled using a real-time WebSocket connection with the browser. Blazor Server apps are fast to load and simple to implement," he explained. "Blazor WebAssembly is now the second supported way to host your Blazor components: client-side in the browser using a WebAssembly-based .NET runtime. Blazor WebAssembly includes a proper .NET runtime implemented in WebAssembly, a standardized bytecode for the web. This .NET runtime is downloaded with your Blazor WebAssembly app and enables running normal .NET code directly in the browser."

How event-driven architecture benefits mobile UX

At its most basic, the EDA consists of three types of components: event producers, event channels and event consumers. They may be referred to by other names, but most EDA systems follow the same basic outline. The producers and consumers operate without knowledge of or dependencies on each other, making it possible to develop, deploy, scale and update the components independently. The events themselves tie the decoupled pieces together. A producer can be any application, service or device that generates events for publishing to the event channel. Producers can be mobile applications, IoT devices, server services or any other systems capable of generating events. The producer is indifferent to the services and systems that consume the event and is concerned only with passing on formatted events to the event channel. The event channel provides a communication hub for transferring events from the producers to the consumers.
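
The three components can be sketched in a few lines. This toy in-process channel (illustrative only; production EDA systems use a broker such as Kafka or a managed event bus) shows the decoupling: the producer and consumer never reference each other, only the channel and the event type:

```python
from collections import defaultdict

class EventChannel:
    """Minimal in-process event channel: producers publish, consumers
    subscribe, and neither side knows about the other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every consumer registered for this type.
        for handler in self._subscribers[event_type]:
            handler(payload)

channel = EventChannel()
received = []
channel.subscribe("order.created", received.append)   # consumer side
channel.publish("order.created", {"id": 42})          # producer side
assert received == [{"id": 42}]
```

Because the only coupling is the event name and payload format, either side can be redeployed, scaled, or replaced independently, which is exactly the property the architecture is after.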

Microsoft Teams Rooms: Switch to OAuth 2.0 by Oct 13 or your meetings won't work

While Basic Authentication is simple to set up, it exposes credentials to attackers who capture them on the network and reuse them on other devices. Basic Authentication is also an obstacle to adopting multi-factor authentication in Exchange Online, said Microsoft. Microsoft intends to turn off Basic Authentication in Exchange Online for Exchange ActiveSync (EAS), POP, IMAP and Remote PowerShell on October 13, 2020. It's encouraging customers to use the OAuth 2.0 token-based 'Modern Authentication'. After installing the Teams Rooms update, admins will be able to configure the product to use Modern Authentication to connect to Exchange, Teams, and Skype for Business services. This move reduces the need to send actual passwords over the network by using OAuth 2.0 tokens provided by Azure Active Directory. While the change is optional until October 13, Microsoft warns that login problems could arise after the cut-off date for Microsoft Teams Rooms devices still configured with Basic Authentication. "Modern authentication support for Microsoft Teams Rooms will help ensure business continuity for your devices connecting to Exchange Online," it said. But it will let customers choose when to switch to Modern Authentication until October 13.
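The weakness of Basic Authentication is easy to demonstrate: the header is just base64-encoded, which is an encoding, not encryption, so anyone who captures it recovers the password directly. The credentials and token below are invented for illustration:

```python
import base64

# Basic Authentication: the real credentials travel on every request.
basic_header = "Basic " + base64.b64encode(b"room@contoso.com:Passw0rd!").decode()

# An attacker capturing the header recovers the password trivially.
recovered = base64.b64decode(basic_header.split()[1]).decode()

# OAuth 2.0 Modern Authentication: the client presents a short-lived
# bearer token issued by Azure Active Directory instead; the password
# itself never crosses the wire, and the token can be scoped and revoked.
bearer_header = "Bearer " + "eyJ...token-from-azure-ad"
```

A stolen bearer token is still sensitive, but it expires and does not expose the underlying account password the way a captured Basic header does.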

Digital Transformation without the Judgement

CEOs have to focus ruthlessly on a small number of priorities. One customer in the rail industry went for approval of an SAP S/4HANA project, and the CFO, seeing the 8-figure budget, asked the CIO: would you like me to approve this project, or buy one more locomotive this year? You might be thinking "buy the train," but it's not that simple. What if this IT project improved rail network throughput by 2%, or decreased the chances of a derailment by 10%? What if it provided efficiencies in cargo prioritisation that meant two fewer locomotives needed to be in service? What are your priorities? How might they be achieved by IT investments? Today's new hires are the Instagram generation. They primarily share images on social media, not diatribes about their personal lives. Tomorrow's new hires will be the Snapchat and TikTok generation, and before we know it, there will be a generation of employees who have never used a laptop. That might be an exaggeration, but the new generation of workers expects an excellent user experience from the tools they use in the workplace. If you want to hire the best talent, you are going to need to think about their needs.

Introducing Project Tye

Project Tye is an experimental developer tool that makes developing, testing, and deploying microservices and distributed applications easier. When building an app made up of multiple projects, you often want to run more than one at a time, such as a website that communicates with a backend API, or several services all communicating with each other. Today, setting this up can be difficult and less smooth than it should be, and it is only the very first step toward building out a distributed application. Once you have an inner-loop experience, there is then a sometimes steep learning curve to get your distributed app onto a platform such as Kubernetes. ... If you have an app that talks to a database, or an app that is made up of a couple of different processes that communicate with each other, then we think Tye will help ease some of the common pain points you've experienced.
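Tye is driven by an optional tye.yaml file at the repository root. The sketch below shows the general shape for the website-plus-API scenario described above; the service names, project paths, and database dependency are invented for illustration, and since Tye is experimental, the schema may change between releases:

```yaml
# tye.yaml - hypothetical layout for a site, a backend API, and a database
name: storefront
services:
- name: frontend            # the website project
  project: frontend/frontend.csproj
- name: backend-api         # the API the site calls
  project: backend/backend.csproj
- name: postgres            # a dependency run as a container alongside the app
  image: postgres
  bindings:
  - port: 5432
```

With a file like this in place, `tye run` starts all of the services together for local inner-loop development, and `tye deploy` targets a Kubernetes cluster.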

Containers as an enabler of AI

The use of containers can greatly accelerate the development of machine learning models. Containerized development environments can be provisioned in minutes, while traditional VM or bare-metal environments can take weeks or months. Data processing and feature extraction are a key part of the ML lifecycle. The use of containerized development environments makes it easy to spin up clusters when needed and spin them back down when done. During the training phase, containers provide the flexibility to create distributed training environments across multiple host servers, allowing for better utilization of infrastructure resources. And once they're trained, models can be hosted as container endpoints and deployed either on premises, in the public cloud, or at the edge of the network. These endpoints can be scaled up or down to meet demand, thus providing the reliability and performance required for these deployments. For example, if you're serving a retail website with a recommendation engine, you can add more containers to spin up additional instances of the model as more users start accessing the website.
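Hosting a trained model as a container endpoint, as described above, usually amounts to packaging the model artifact and a small web server into an image. The Dockerfile below is a generic sketch under assumed file names (`model.pkl`, `serve.py`, `requirements.txt` are placeholders, not from the article):

```dockerfile
# Hypothetical image for serving a trained model as a container endpoint
FROM python:3.9-slim
WORKDIR /app

# Install serving dependencies (e.g. a web framework and the ML library
# the model was trained with) before copying code, to cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The trained model artifact and the endpoint code
COPY model.pkl serve.py ./

EXPOSE 8080
CMD ["python", "serve.py"]
```

Scaling then becomes an orchestration concern rather than an application change: for the retail recommendation example, an operator could run something like `kubectl scale deployment recommender --replicas=5` as traffic grows, and scale back down afterward.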

Google Open-Sources AI for Using Tabular Data to Answer Natural Language Questions

Co-creator Thomas Müller gave an overview of the work in a recent blog post. Given a table of numeric data, such as sports results or financial statistics, TAPAS is designed to answer natural-language questions about facts that can be inferred from the table; for example, given a list of sports championships, TAPAS might be able to answer "which team has won the most championships?" In contrast to previous solutions, which convert natural-language queries into software query languages such as SQL and then run them against the table, TAPAS learns to operate directly on the data, and it outperforms the previous models on common question-answering benchmarks: by more than 12 points on Microsoft's Sequential Question Answering (SQA) and more than 4 points on Stanford's WikiTableQuestions (WTQ). Many previous AI systems solve the problem of answering questions from tabular data with an approach called semantic parsing, which converts the natural-language question into a "logical form": essentially translating human language into programming-language statements.
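The contrast between the two approaches can be illustrated with a toy championships table. The table, question, and hard-coded "prediction" below are invented for illustration; neither function is the actual TAPAS model:

```python
# Toy table: championships won per team (invented data)
table = {
    "team":   ["Eagles", "Hawks", "Wolves"],
    "titles": [3, 7, 5],
}

def semantic_parsing_answer(table):
    """Semantic parsing: the question is first translated into a logical
    form (e.g. SQL: SELECT team ORDER BY titles DESC LIMIT 1), which is
    then executed against the table. Here we execute that query by hand."""
    best = max(range(len(table["titles"])), key=lambda i: table["titles"][i])
    return table["team"][best]

def tapas_style_answer(table):
    """TAPAS instead scores table cells directly and outputs a cell
    selection plus an optional aggregation operator, with no intermediate
    query; here we hard-code the cell a trained model would select."""
    return "Hawks"  # selected cell, aggregation = NONE

answer = semantic_parsing_answer(table)
```

Skipping the intermediate logical form is what lets TAPAS be trained end to end from question/answer pairs, without hand-written query annotations.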

Quote for the day:

"Leadership is not a solo sport; if you lead alone, you are not leading." -- D.A. Blankinship