Tech execs should be more rigorous about succession planning for one important reason: institutional memory. Tech firms are generally younger than other companies of a similar size, which partly explains why the median age of S&P 500 companies plunged to 33 years in 2018 from 85 years in 2000, according to McKinsey & Co. These enterprises have clearly accomplished a lot in their short lives, but in their haste, most have not captured their history, unlike their longer-lived peers in other sectors. Fewer than half of these tech firms, in fact, have formally recorded their leader’s story for posterity. That puts them at a disadvantage when, inevitably, they must onboard newcomers to their C-suites. It’s best to record this history well before the intense swirl of a leadership transition begins. Crucially, it will help incoming and future generations of leadership understand critical aspects of the organization’s track record, the lessons learned, its culture and its identity. It also explains why the organization has evolved as it has, what binds people together and what may trigger resistance based on previous experience. It’s as much about moving forward as looking back.
In recent years, the EU has taken conscious steps towards addressing some of these issues, laying the groundwork for proper regulation of the technology. Its most recent proposals revealed plans to classify different AI applications according to their risks. Restrictions are set to be introduced on uses of the technology identified as high-risk, with potential fines for violations of up to 6pc of global turnover or €30m, whichever is higher. But policing AI systems can be complicated. Joanna J Bryson is professor of ethics and technology at the Hertie School of Governance in Berlin, and her research focuses on the impact of technology on human cooperation as well as AI and ICT governance. She is also a speaker at EmTech Europe 2021, which is currently taking place in Belfast as well as online. Bryson holds degrees in psychology and artificial intelligence from the University of Chicago, the University of Edinburgh and MIT. It was during her time at MIT in the 90s that she first began engaging with the ethics around AI.
When we design and build a Data Platform, we always need to evaluate whether automation provides enough value to compensate for the team’s effort and time. Time is the only resource we cannot scale. We can grow the team, but the relationship between headcount and productivity is not linear. Sometimes, when a team is very focused on the automation paradigm, people want to automate everything, even actions that are performed only once or do not provide real value. ... Usually, this is not an easy decision, and it has to be evaluated by the whole team. In the end, it is an ROI decision. I don't like this concept very much because it often focuses on economic costs and forgets about people and teams. Before starting any design and development, we have to analyze whether there are tools available to cover our needs. As software engineers, we often want to develop our own software. But, from a team or product view, we should focus our efforts on the most valuable components and features. The goal of the Data Ingestion Engine is to simplify data ingestion from the data source into our Data Platform by providing a standard, resilient and automated ingestion layer.
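To make the idea of a standard, resilient ingestion layer concrete, here is a minimal, hypothetical sketch (the names, config shape and retry policy are illustrative assumptions, not the author’s actual engine): every source implements one contract, so retries and loading are provided once by the platform rather than rebuilt per pipeline.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical sketch: a single ingestion contract shared by all sources,
# so resilience (retries) lives in the engine, not in each pipeline.

@dataclass
class SourceConfig:
    name: str
    extract: Callable[[], Iterable[dict]]  # pulls raw records from the source
    max_retries: int = 3                   # resilience: retry transient failures

def ingest(config: SourceConfig, sink: list) -> int:
    """Run one standardized, resilient ingestion for a configured source."""
    for attempt in range(1, config.max_retries + 1):
        try:
            records = list(config.extract())
            sink.extend(records)           # load into the platform's landing zone
            return len(records)
        except IOError:
            if attempt == config.max_retries:
                raise                      # retries exhausted: surface the failure
    return 0

# Usage: onboarding a new source becomes configuration, not new engine code.
landing_zone: list = []
orders = SourceConfig(name="orders", extract=lambda: [{"id": 1}, {"id": 2}])
loaded = ingest(orders, landing_zone)
```

The design point is the one the excerpt makes: automation pays off here because the engine is written once and reused for every source, instead of being a one-off effort.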
With GNAP, a client can ask for multiple access tokens in one grant request (vs. multiple requests). For instance, you could request read privileges on one resource and read and write privileges on another. ... In GNAP, the requesting client declares what kinds of interactions it supports. The authorization server responds to the request with an interaction to be used to communicate with the resource owner or the resource client. These interactions are defined in the GNAP spec as first-class objects, which provides extension points for future communication methods. Interactions may include redirecting the browser, opening a deep-link URL in a mobile application or providing a user code to be used elsewhere. ... Unlike OAuth2, GNAP provides a grant identifier if the authorization server determines a grant can be continued. In the sample below, the grant identifier, access_token.value, can be presented to the authorization server if the grant needs to be modified or continued after the initial request.
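The article’s original sample is not reproduced here; as a hedged sketch, the JSON payloads involved might look like the following (field shapes follow the GNAP draft specification, but the resource types, labels, URL and token value are illustrative placeholders, not taken from the article):

```python
# Sketch of GNAP grant request/response payloads as Python dicts.

# One grant request asking for two access tokens with different privileges:
# read on one resource, read and write on another.
grant_request = {
    "access_token": [
        {"label": "reports", "access": [{"type": "report-api", "actions": ["read"]}]},
        {"label": "photos", "access": [{"type": "photo-api", "actions": ["read", "write"]}]},
    ],
    # The client declares which interaction start methods it supports.
    "interact": {"start": ["redirect", "app", "user_code"]},
}

# If the server determines the grant can be continued, its response carries
# a continuation section; the client later presents this token value to
# modify or continue the grant after the initial request.
grant_response = {
    "continue": {
        "uri": "https://as.example/continue/tx-123",        # illustrative URL
        "access_token": {"value": "80UPRY5NM33OMUKMKSKU"},  # illustrative value
    }
}

continuation_token = grant_response["continue"]["access_token"]["value"]
```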
Closely related to entrepreneurship is resilience. Humans are nothing if not adaptable, but embracing shifts and bouncing forward (rather than back) will require new competencies. The skill of resilience requires you to 1) stay aware of new information, 2) make sense of it, and 3) reinvent, innovate and solve problems. Finding fresh approaches and flexing based on your insights will be fundamental to success. ... Inherent to moving forward is the ability to believe in a positive future and focus on possibilities. When experts find fault with a lack of responsiveness, it’s often the result of a lack of imagination. The skills of envisioning and foreseeing what might happen are critical to staying motivated, inspired and driven to create new beginnings. ... Success has always been about your network, but achievement in the future will depend even more on the strength of relationships. Your social capital and primary, secondary and tertiary relationships will be critical netting, offering you new learning, access to new opportunities and social support. The new skill will be the ability to build rapport, and to build it quickly and from a distance.
Whilst there has been plenty of hype in recent years around the impact AI will have on the website design and development community, the reality is that Artificial (Design) Intelligence technology is still very much in its infancy, and there’s a long way to go before we see web designers and developers being replaced by robots. AI-powered platforms and tools are actually making digital creatives and engineers more productive and more effective, allowing them to produce higher-quality digital experiences at a lower cost. The concept behind using Artificial Intelligence to create websites is quite simple: AI-powered code-completion tools “make” a website on their own, and machine learning is then leveraged to optimize the user interface – entirely through adaptive intelligence, with minimal human intervention. ... The power of human creativity brings with it an innate curiosity; we are always looking to challenge the status quo and experiment with new forms and aesthetics. Creativity will always be a human endeavor.
While challenging, this requirement led to an innovation that helped the payment services provider optimize its financial operations and better understand and expand its business. ZPS collaborated with the University of Seville in Spain to build a customized cash-flow model to uncover valuable liquidity and financial planning insights. Within this guarantee-monitoring model, ZPS uses Intelligent ERP to replicate data on contract accounts receivable in near-real time to a business warehousing solution and other reporting applications. An in-memory database then processes the data, calculates key figures such as customer cash-in and factoring cash-outs, and uses these figures to determine the amounts to be guaranteed each day. Furthermore, with a live connection to its business warehousing solution, ZPS uses a cloud-based analytics solution to let employees access calculated data and consume reports through intuitive dashboards and predictive stories. By amplifying the value of its Big Data with Intelligent ERP and augmented analytics, ZPS allows a larger circle of business users to gain insights into financial KPIs, such as gross customer cash-ins or days from order.
The authors emphasize that this isn’t definitive proof that there is no connection between racial and ethnic diversity and profits—more research is needed on that front. They also note several other important caveats, including that S&P 500 companies are not a random sample of public US firms, and that their method of identifying race and ethnicity among executives (using faces and names) is likely to overestimate the number of white executives. But they criticize McKinsey’s methodology, including its metric for measuring diversity among executives. They conclude that “caution is warranted in relying on McKinsey’s findings to support the view that US publicly traded firms can deliver improved financial performance if they increase the racial/ethnic diversity of their executives.” Among the additional research that Green and Hand call for is a way to better examine whether there is any causal relationship between a firm’s diversity and its financial performance. McKinsey, by its own admission, is only looking at correlation.
With the value of data science clear from its potential across these industries, there is no reason to believe data science will be anything but a growing profession for years to come. AI adoption alone has skyrocketed in recent years: half of all surveyed organizations now say they have applied AI to fulfill at least one function, with many more intending to invest in data-driven solutions. As the accessibility and power of data grow, so too does the need for data scientists. Data scientists must now help businesses navigate a world of global data collection and applications. From securing business processes to meeting international data security standards to connecting new and vital patterns in business trends, data scientists are vital to the success of innumerable businesses across industries. One such measure they can be part of is setting global data security standards for various industries. Data science is still one of the sexiest jobs you can have because it increasingly means helping people and saving money.
With the cost of deep learning model training on the rise, individual researchers and small organisations are settling for pre-trained models. Today, the likes of Google or Microsoft have budgets (read: millions of dollars) for training state-of-the-art language models. Meanwhile, efforts are underway to make the whole paradigm of training less daunting for everyone. Researchers are actively exploring ways to maximise training efficiency so that models run faster and use less memory. A common practice is to train small models until they converge and then lightly apply a compression technique. Techniques like parameter pruning have already become popular for reducing redundancies without sacrificing accuracy. In pruning, redundancies in the model parameters are identified, and the redundant, non-critical ones are removed. Identifying important training data plays a role in online and active learning. But how much of the data is superfluous? ... For instance, the capabilities of computer vision systems have improved greatly due to (a) deeper models with high complexity, (b) increased computational power and (c) the availability of large-scale labeled data.
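As a toy illustration of the pruning idea (magnitude-based pruning is one common variant; the function and numbers below are an assumed sketch, not from the excerpt), parameters with the smallest absolute values are treated as non-critical and zeroed out:

```python
# Toy magnitude-based pruning: weights with the smallest absolute values
# are assumed redundant and set to zero, shrinking the effective model.

def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest |value|."""
    n_prune = int(len(weights) * sparsity)
    # Rank indices by absolute magnitude; the smallest are pruned first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_prune])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]

w = [0.8, -0.05, 0.3, 0.01, -0.6, 0.02]
pruned = prune_by_magnitude(w, sparsity=0.5)
# The three smallest-magnitude weights (0.01, 0.02, -0.05) are zeroed,
# while the large-magnitude weights survive unchanged.
```

In practice, pruning is applied to trained networks (often followed by fine-tuning to recover accuracy), but the core criterion is this simple ranking of parameters by importance.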
Quote for the day:
"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis