Data science as a discipline was poorly understood, and most organizations had not yet implemented a data strategy aligned with their business objectives. Therefore, the first wave of data scientists had the time, training and support of the business to experiment and explore possibilities, just as Patil and Davenport had recommended. “Experimentation” is not scoped to any specific strategic priority, however. In practice, data science was science—a pursuit of knowledge. Real-world applications would have to wait. The consequence is that AI as a concept matured but AI in practice faltered. Over time, data science divisions moved further from the business strategy they were supposed to support. Silos emerged between business and technical units. Small successes were celebrated and held up as indicators that the process was working. But scaling them proved difficult. Executives, unsure why the whole process isn’t automated, continue to invest in people and technology to try to narrow the gap. The problem they face isn’t technological, though. It’s cultural. The goal of a company is not to set up a robust data environment; it’s to build, use and sell data products.
So, if there is an acknowledgment that testing is important and a fear that failing to test software could lead to job losses, the obvious question is: why is software not tested properly? This often comes down to businesses believing there is no viable, cost-effective option and choosing speed over stability. However, there are more specific reasons we can unpack. When asked why their software wasn’t tested properly before being released, CEOs and testers in the same Censuswide survey cited a few primary reasons. The first is a reliance on manual testing, which is time- and resource-intensive and therefore often skipped or rushed. This is compounded by the feeling that development cycles need to be quicker to compete in a crowded market. The next most prominent reasons were a lack of skilled developers available to conduct testing, or a lack of investment in training and development to upskill those already on the team. ... There needs to be a transition from manual testing towards automation to meet the testing requirements of increasingly complex software, as businesses struggle to scale their chosen solutions and leverage existing skills across Quality Assurance departments.
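To make the manual-to-automated transition concrete, here is a minimal sketch of what moving a repeated manual check into an automated test looks like. `parse_price` is a hypothetical function standing in for real application code, and the test functions follow common pytest conventions; this is an illustration, not a prescription for any specific tool.

```python
# Hypothetical application code: the kind of logic a tester
# would otherwise re-verify by hand before every release.
def parse_price(text: str) -> float:
    """Convert a user-entered price string like '$1,299.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))


# Automated checks: run on every build instead of once per manual test pass.
def test_plain_number():
    assert parse_price("42") == 42.0

def test_currency_symbol_and_thousands_separator():
    assert parse_price("$1,299.50") == 1299.5
```

Once checks like these live in a test suite, they run in seconds on every commit, which is precisely the scaling that manual testing cannot offer.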
Disruptive technologies like AI, blockchain, or the metaverse herald new possibilities for value and wealth creation for many investors and technologists. But there is a much larger subset of humanity for whom the ascendance of these new machines looms as an existential threat. Might the housekeeping robot one day get fed up with serving the morning coffee and turn into a killer robot? From one day to the next, the robot’s owners become its slaves. We are being tongue-in-cheek here, but these are genuine concerns for many people. When it comes to our jobs, careers, and employment, the big questions at the back of our minds are, “Will my job become obsolete? Will I be terminated? Worse, will I be unemployable, a little pawn in a world run by a super-intelligence?” These are the ethical, moral, and practical questions in the background for which solutions have yet to be invented. ... A hidden bias that disproportionately favors one racial, age, or gender group over another in crucial decisions such as hiring is one thing. More chillingly, consider the impact AI bias could have in determining whether someone should be prosecuted or sentenced to prison, and perhaps even the length of their sentence.
While cryptocurrency pushes the limits of blockchain technologies and makes the headlines, financial services companies are known for their conservative approach to software development. That doesn't mean they've been unfriendly to Linux and open source; quite the opposite. ... So it is, said Gabriele Columbro, FINOS' Executive Director, that open-source adoption continues "laying out the necessary building blocks for an organic, growing, and sustainable open community in the industry. While we know there is still a lot of work to do to reach full maturity, we're extremely proud of the major role that FINOS played in opening up financial services to the disruptive innovation benefits open source can deliver to this sector." Part of that remaining work: compared with other sectors, such as IT, science, and telecom, financial services companies lag behind in encouraging open-source contribution. Still, more than half (54%) of respondents say contributing to open source improved the quality of the software they are currently using. In addition, active participation in open source was cited as a key factor in recruiting and retaining IT talent.
The old quip that ‘if you’re not paying for the product, you are the product’ has not discouraged people from joining services such as Facebook, which has seen enormous user growth since its launch. According to Statista, 2.7 billion people use Facebook, a figure that has grown remarkably consistently since the company passed 1 billion users in 2012. However, to see the value people put on transparency, you only have to look at the uproar Facebook caused in January when it updated WhatsApp’s terms of service to state that data from private conversations would be used to inform ads on Facebook’s other platforms. The change led to a 4,200% increase in user growth for rival app Signal. ... ForgeRock’s research suggests that Singaporeans are not averse to providing access to their data, so long as they are told upfront what it will be used for. The outcry over TraceTogether provides a lesson on the importance of transparency when talking to people about how their data is going to be used. This is only going to become more crucial in the future.
There is a blurring of boundaries between AI and the Internet of Things. While each technology has merits of its own, only when they are combined can they offer novel possibilities. Smart voice assistants like Alexa and Siri only exist because AI and the Internet of Things have come together. Why, then, do these two technologies complement one another so well? ... We are moving from the concept of Artificial Intelligence to Augmented Intelligence, where decision models blend artificial and human intelligence, and where AI finds, summarizes, and collates information from across the information landscape, for example a company’s internal data sources. ... Composite AI is a new approach that generates deeper insights from any content and data by fusing different AI technologies. Knowledge graphs are much more symbolic, explicitly modeling domain knowledge, and, when combined with the statistical approach of ML, they create a compelling proposition. Composite AI expands the quality and scope of AI applications and, as a result, is more accurate, faster, more transparent and understandable, and delivers better results to the user.
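The knowledge-graph-plus-ML idea can be sketched in a few lines. Everything below is invented for illustration: the triples, the scores, and the function names are hypothetical stand-ins (a real composite system would query an actual graph store and a trained model), but the fusion pattern, symbolic evidence combined with a statistical confidence, is the point.

```python
# Symbolic side: explicit domain knowledge as (subject, relation) -> objects triples.
# All facts here are toy examples, not medical advice.
KNOWLEDGE_GRAPH = {
    ("aspirin", "treats"): {"headache", "fever"},
    ("aspirin", "interacts_with"): {"warfarin"},
}

# Statistical side: stand-in for a trained model's relevance scores.
ML_SCORES = {("aspirin", "headache"): 0.92, ("aspirin", "baldness"): 0.10}

def ml_relevance(drug: str, condition: str) -> float:
    """Mock ML confidence that `drug` is relevant to `condition`."""
    return ML_SCORES.get((drug, condition), 0.1)

def composite_answer(drug: str, condition: str) -> dict:
    """Fuse the symbolic fact with the statistical score into one answer."""
    supported = condition in KNOWLEDGE_GRAPH.get((drug, "treats"), set())
    score = ml_relevance(drug, condition)
    return {
        "supported_by_graph": supported,  # explainable, symbolic evidence
        "ml_score": score,                # statistical confidence
        "answer": supported and score > 0.5,
    }
```

Because the symbolic check is explicit, the system can explain *why* it answered, which is where the transparency claim for composite AI comes from.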
Imagine a world where the environment around you is as programmable as software: a world where control, customization, and automation are enmeshed in our surroundings. In this world, people can command their physical environment to meet their own needs, choosing what they see, interact with, and experience. Meanwhile, businesses leverage this enhanced programmability to reinvent their operations, subsequently building and delivering new experiences for their customers. ... Leading enterprises will be at the forefront of the programmable world, tackling everything from innovating the next generation of customizable products and services to architecting the hyper-personalized and hyper-automated experiences that shape our future world. Organizations that ignore this trend, fatigued by the unfulfilled promise of IoT, will struggle as the world automates around them. Delaying the infrastructure and technology necessary to tap into this rich opportunity means many organizations may find themselves playing catch-up in a world that has already taken the next step.
While it’s true the profession is suited to logical thinkers, often with a strength in maths, this is by no means the cliché that is so often represented. Perception is incredibly important. Young people making decisions about their future are influenced by many factors, from more traditional sources such as teachers, careers advisors and family, through to how they perceive a job role or industry from the media they consume. While there have been heavy-handed attempts to subvert stereotypes – just think of the somewhat notorious government-backed advert depicting a ballet dancer who could retrain to work in cyber security – I do believe the overall sentiment was correct. Next year will see the launch of the cyber security occupational specialism that will form part of the Digital T Level. The qualification is aimed at 16- to 19-year-olds and is equivalent to three A Levels, with a focus on developing technical and vocational skills through a mix of classroom-based learning and an industry placement.
Simplifying complexity is an art form, but such an exercise can easily fall into the trap of oversimplification. And yet, through all my years of asking leaders about the X factors that separate employees, I have wondered what quality actually makes someone stand out and get that promotion. Here’s my vote: an extreme sense of accountability and ownership of the job. People with these qualities figure out how to get something done, even if the path to success is unclear. When things get tough, they don’t point fingers or throw up their hands in frustration or complain that something isn’t fair or is too hard. Ownership is not just about having a strong work ethic—it’s about having a sense of responsibility to follow through and deliver. I saw this quality firsthand in many of the reporters I worked with during my 14 years as an editor at Newsweek magazine and the New York Times. Reporting requires creativity, resourcefulness, and persistence. There were some people who I just knew would get the work done. And when I’ve interviewed business leaders about the qualities that set high performers apart, this theme of responsibility has come up often.
Responsible AI practices have not kept pace with AI adoption for various reasons. Some firms put responsible AI on hold because of legislative uncertainty and complexity, thus delaying value realization on business opportunities. Other challenges include concerns about AI’s potential for unintended consequences, a lack of consensus on defining and implementing responsible AI practices, and over-reliance on tools and technology. To overcome these challenges, it’s important to understand that technology alone is insufficient to keep up with the rapidly evolving AI space. Tools for bias detection, privacy protection, and regulatory compliance can lure organizations into a false sense of confidence and security. Rigidly defined accountability structures and incentives for responsible AI practices may look good on paper but are often ineffective. Bringing multiple perspectives and a diversity of opinions to technology requires a disciplined, pragmatic approach. To adopt a responsible AI strategy, some key concepts must be kept in mind, starting with setting a strong foundation.
Quote for the day:
"Courage is the ability to execute tasks and assignments without fear or intimidation." -- Jaachynma N.E. Agu