Given that financial institutions are custodians of significant amounts of third-party data, much of which is personal and sensitive, it is imperative, now more than ever, to manage and assess the risks and their impact on the existing ecosystem in order to drive optimum value from digital initiatives. The risks are indeed multiplied where data is involved. With the ubiquity of online banking apps and services, a breach at some point is almost certain, and that is when banks must be prepared. As the cadence of cyberattacks increases, organisations can no longer hide internal dysfunction from external stakeholders. “[When] an inevitable breach, audit or Royal Commission happens, financial institutions will only survive the exposure if they can show that they have actually taken all reasonable steps to protect themselves,” Greaves said. Taking control of high-risk data must be the first step in mitigating these risks. “The key to treating information risk is to have full control of that information. If an institution is unfamiliar with what data it has, who is doing what to it, and where and how it is stored within its systems, it will be unable to control it or protect it,” Greaves said.
Predicting the nature of future jobs is, of course, difficult or impossible to do with precision. And even if predictions are possible, they will probably differ substantially from job to job. Nevertheless, some companies are embarking on approaches that predict the future of either all jobs in the organization, those that are particularly likely to be affected by AI, or jobs that are closely tied to future strategies. ... Some companies are making specific job predictions based on their strategies or products. In Europe, a consortium of microelectronics companies is devoting 2 billion euros to train current and future employees on electronic components and systems. General Motors is focused on training its employees to manufacture electric and autonomous vehicles. Verizon is focused on hiring and training data scientists and marketers to expand its 5G wireless technology. SAP is focused on growing employees’ skills in cloud computing, artificial intelligence development, blockchain, and the internet of things. The raging bull of machine learning has turned out to be slower and calmer than many people predicted a few years ago. But any rancher knows you should never turn your back on a bull, no matter how docile it seems.
SDxI must understand the appropriate context of users, applications, devices, and locations related to the creation of a virtual machine, container, or even a data flow or set of network attributes such as source/destination addresses and tags. Advanced infrastructure needs to be able to gather context-relevant metrics for debugging, security and audit, performance management, and billing and marketing. Historically, context-awareness was the purview of specialized point products such as networking devices (primarily Layers 4-7) that directed and processed traffic based on rules and inspection of incoming data. But this processing only occurs at specific points in the infrastructure. SDxI applications are more demanding and need holistic context-awareness across networking, compute, and storage to optimize workload placement in the context of what a user, device, or app is trying to accomplish. For example, efforts are underway to add context-driven automation to both private and public cloud environments via OpenStack Heat. In this model, external context-based triggers drive VMs and their compute, storage, and network resources to spin up or down to maximize performance, minimize latency, or meet appropriate business objectives.
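The trigger model described above can be sketched in a few lines. This is an illustrative Python sketch only, not the OpenStack Heat API: the `autoscale` function, its metric names, and its thresholds are all hypothetical, standing in for the context-based rules a real orchestration policy would encode.

```python
def autoscale(current_vms, p95_latency_ms, cpu_util,
              max_latency_ms=200, target_cpu=0.6,
              min_vms=1, max_vms=10):
    """Return a new VM count from context-relevant metrics (hypothetical policy)."""
    # Scale out when latency or CPU pressure threatens the business objective.
    if p95_latency_ms > max_latency_ms or cpu_util > 0.8:
        return min(current_vms + 1, max_vms)
    # Scale in when the pool is clearly idle, to reduce cost.
    if cpu_util < target_cpu / 2 and current_vms > min_vms:
        return current_vms - 1
    return current_vms

print(autoscale(3, 250, 0.7))  # latency breach: scale out to 4
print(autoscale(3, 100, 0.2))  # idle pool: scale in to 2
```

A real Heat deployment would express the same logic declaratively in a template, with alarms from a telemetry service firing the scale-out and scale-in actions.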
While digital banking has been around since long before the pandemic, its usage spiked during the pandemic. Research shows that about 50 per cent of consumers are using digital banking products more since the pandemic began, with 87 per cent of them planning to continue this increased usage afterwards. This shows that digital banking has evolved from a “nice-to-have” to a “must-have” solution for consumers and businesses. However, despite the convenience that digital banking offers, many consumers are still wary of the dangers that digital banking solutions bring. ... Just as self-service solutions have become widespread during the pandemic to avoid possible infection, autonomous finance is expected to rise in 2021 as well. Several fintech solutions today make it possible for people to manage their money, open accounts, apply for loans, and more with just a click of a button. Thanks to AI and machine learning, these solutions are now more accessible than lining up in traditional banks and going through tedious processes. ... Bitcoin’s rising price is due to various reasons, some of which include growing institutional interest, usage as a hedge against inflation, and PayPal’s official entrance into the crypto scene.
Successful cross-enterprise data strategies bring a unified approach to data integration, quality, governance, and data sharing. Innovation does not come through a set of siloed products; it comes through a single platform that moves and manages different types of data under one roof. To create a successful data management strategy and avoid any data security mishaps, chief data officers (CDOs) and their teams should start by setting up governance and establishing business rules and system controls for access. CDOs report the most success when their data sharing architecture is built on microservices that answer business questions. That is, what data is needed to provide insights into the most difficult business problems? For example, the CDO of a large Internet-based home furnishing company recently shared that when they treat data integration as a business transformation project, they receive better requirements about business needs, data security and data trust, more focus from stakeholders, and broader adoption across the organization and within roles. Another best-practice approach that encourages sharing while labeling only trusted, vetted data sources is the concept of certified versus uncertified data sets.
The key principle underlying these two natural methods, neither of which requires extra hyperparameters, is that the training behavior of a factorized model should mimic that of the original (unfactorized) network. We further demonstrate the usefulness of these schemes in two settings beyond model compression where factorized neural layers are applied. The first is an exciting new area of knowledge distillation in which an overcomplete factorization is used to replace the complicated and expensive student-teacher training phase with a single matrix multiplication at each layer. The second is for training Transformer-based architectures such as BERT, which are popular models for learning over sequences like text and genomic data and whose multi-head self-attention mechanisms are also factorized neural layers. Our work is part of Microsoft Research New England’s AutoML research efforts, which seek to make the exploration and deployment of state-of-the-art machine learning easier through the development of models that help automate the complex processes involved.
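The core idea of a factorized neural layer can be made concrete with a small sketch. This is an illustrative Python/NumPy example under the usual low-rank assumption, not the paper's own implementation: a dense layer's weight matrix W is replaced by a rank-r product U @ V, so the layer computes two cheaper multiplies instead of one large one.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 32, 8                 # layer dimensions and target rank (illustrative)
W = rng.standard_normal((m, n))     # the original dense weight matrix

# Truncated SVD gives the best rank-r approximation of W in Frobenius norm.
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]           # m x r factor (singular values folded in)
V = Vt[:r, :]                       # r x n factor

x = rng.standard_normal(n)
y_full = W @ x                      # original layer output
y_fact = U @ (V @ x)                # factorized layer: two smaller multiplies

# Parameters drop from m*n to r*(m+n) when r is much smaller than min(m, n).
print(W.size, U.size + V.size)      # 2048 768
```

The training-behavior question the excerpt raises is exactly about such factors: initialized and regularized naively, U and V train differently from W, which is what spectral-style initialization schemes aim to correct.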
Barring some of India’s major cities, good Internet connectivity is still the stuff of dreams. Fighting bugs while your strongest warrior is out cold due to poor connectivity is every CTO’s worst nightmare. It’s not just about dire circumstances, though; many young developers live in shared accommodation without a personal space to focus on work. Remote work can succeed only with the implicit understanding that work time at home is as focused as work time in the office. In addition to poor Internet, the lack of facilities such as a good work desk and a well-lit room also hampers productivity. One of the prime reasons developers are aching to come back to the office is that coding in bed in your PJs has an early expiry date. Then there’s connection. Indian workplaces have traditionally depended more on verbal communication than written documentation. We’d rather walk up to someone and provide feedback than write it up in precise points in an email. With remote work, both developers and managers need to adopt a different cadence of verbal and written communication that is direct and constructive.
Boards can only be effective if they have the ability to come to a consensus. No one wants to feel that the board is made up of factions with irreconcilable differences. Even when the board undergoes a shake-up, like the addition of an activist director, they tend to quickly reach a new equilibrium. But while consensus-building is important, boards may be too inclined to seek harmony or conformity. This can lead to groupthink, where dissenting views are not welcomed or entertained. In fact, while most boards work to solicit a range of views and come to a consensus on key issues, 36% of directors say it is difficult to voice a dissenting view on at least one topic in the boardroom. This can point to dysfunctional decision-making as the board members avoid making waves. In fact, the most common reason that directors cite for stifled dissent on their boards is the desire to maintain collegiality among their peers. Groupthink is also magnified when the board is not effectively educated on a topic, or does not have access to the right information. Board materials may come too late for members to have any real time to review and reflect on the information before a meeting.
Hackland recognises that it can be difficult for CIOs to gain funding for innovative projects, especially in organisations with competing priorities. But when there's a chance to try something new, the opportunity must be grabbed – not just in terms of the potential benefits it might bring to the company itself but also in terms of professional development. "You're learning and your people are learning," says Hackland, referring to the importance of experimentation. "They're engaged in something new, they're not just doing lights-on, which I think is really important. They're getting to play with new technologies." Which brings us back to Williams' recent foray into virtual reality, which was one such attempt to try something new. The intention was to allow users of a bespoke VR app to view and manipulate the new car in its livery in 3D. The app, which was created by an external agency, was made available for fans to download on the Apple App Store and Google Play Store. However, when pictures of the FW43B started appearing online, the team couldn't be sure if only the image data for the new car had been unpacked or whether the app itself had been compromised.
At its core, platform engineering is all about building, well, a platform. In this context, I mean an internal platform within an organisation, not a general business platform for external consumers. This platform serves as a foundation for other engineering teams building products and systems on top of it for end users. Concrete goals include: improving developer productivity and efficiency, through things like tooling, automation and infrastructure-as-code; providing consistency and confidence around complex cross-cutting areas of concern, such as security and reliable auto-scaling; and helping organisations to grow teams in a sustainable manner to meet increased business demands. Matthew Skelton concisely defines a platform as “a curated experience for engineers (the customers of the platform)”. This phrase “curated experience” very nicely encapsulates the essence of what I have come to recognise and appreciate as a crucial differentiator for successful platforms. Namely, it’s not just about one technology solving all your problems. Nor is it about creating a wrapper around a bunch of tech.
Quote for the day:
“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick