How to minimise technology risk and ensure that AI projects succeed
Organisations are using lots of different technologies and multiple processes to try to manage all this, and that’s what is causing the delay in getting models into production and used by the business. If we can have one platform that addresses all of those key areas, then the speed at which an organisation gains value from that platform increases massively. To do that, you need an environment in which to develop applications to the highest level of quality and internal customer satisfaction, and an environment in which the business can then easily consume those applications. Sounds like the cloud, right? Well, not always. When you look at aligning AI, you also have to think about how AI is consumed across an organisation; you need a method to move it from R&D into production, but once it is deployed, how do we actually use it? What we are hearing from organisations is that what they actually want is a hybrid development and provisioning environment, where this combination of technologies can run without issues, whatever the development or target environment is: cloud, on-premise, or a combination of the two.
Getting a grip on basic cyber hygiene
In regard to cyber defense, basic cyber hygiene, or a lack thereof, can mean the difference between a thwarted and a successful cyber-attack against your organization. In the latter case, the results can be catastrophic. Almost all successful cyber-attacks take advantage of conditions that could reasonably be described as “poor cyber hygiene”: not patching, poor configuration management, keeping outdated solutions in place, and so on. Inevitably, poor cyber hygiene invites risk and can put the overall resilience of an organization in jeopardy. Not surprisingly, today’s security focus is on risk management: identifying risks and vulnerabilities, and eliminating or mitigating those risks where possible, to make sure your organization is adequately protected. The challenge here is that cybersecurity is often an afterthought. To improve a cybersecurity program, there needs to be a specific action plan that the entire cyber ecosystem of users, suppliers and authorities (government, regulators, the legal system, etc.) can understand and execute. That plan should emphasize basic cyber hygiene and be backed by implementation guidance, tools and services, and success measures.
Get started with MLOps
Getting machine learning (ML) models into production is hard work; depending on the level of ambition, it can be surprisingly hard. In this post I’ll go over my personal thoughts (with implementation examples) on principles suitable for the journey of putting ML models into production within a regulated industry, i.e. when everything needs to be auditable, compliant and under control: a situation where a hacked-together API deployed on an EC2 instance is not going to cut it. Machine learning operations (MLOps) refers to an approach in which a combination of DevOps and software engineering practices is applied so that ML models can be deployed and maintained in production reliably and efficiently. Plenty of information can be found online discussing the conceptual ins and outs of MLOps, so this article will instead focus on being pragmatic, with plenty of hands-on code: essentially setting up a proof-of-concept MLOps framework based on open source tools. The final code can be found on GitHub. At its core, it is all about getting ML models into production; but what does that mean?
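As a flavour of what such a framework has to automate, here is a minimal sketch of tracking a single training run so that it stays auditable, assuming scikit-learn and MLflow (both open source); the experiment name, parameters and dataset are illustrative and not taken from the article’s GitHub repository.

```python
# Minimal sketch: train a model and record its parameters, metric and artefact
# in an experiment tracker, so the run is reproducible and reviewable.
# Experiment name, parameters and dataset are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("iris-poc")  # illustrative experiment name

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 3}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                                  # what was trained
    mlflow.log_metric("accuracy",                              # how well it performed
                      accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")     # the artefact to promote
```

A run logged like this can later be compared and reviewed before anything is promoted towards production, which is the kind of traceability a regulated setting demands.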
ESB vs Kafka
The appropriate answer to both questions is: “Yes, but…” In spite of their similarities, ESBs and stream-processing technologies such as Kafka are designed not so much for different use cases as for wholly different worlds. True, a flow of message traffic is potentially “unbounded” – e.g., an ESB might transmit messages that encapsulate the ever-changing history of an application’s state – but each of these messages is, in effect, an artifact of a world of discrete, partitioned – i.e., atomic – moments. “Message queues are always dealing in the discrete, but they also work very hard to not lose messages, not to lose data, to guarantee delivery, and to guarantee sequence and ordering in message transmits,” said Mark Madsen, an engineering fellow with Teradata. Stream processing, by contrast, corresponds to a world that is in a constant state of becoming; a world in which – as the pre-Socratic philosopher Heraclitus famously put it – “everything flows.” In other words, says Madsen, using an ESB to support stream processing is roughly analogous to using a Rube Goldberg-like assembly line of buckets – as distinct from a high-pressure feed from a hose – to fill a swimming pool.
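To make the stream side of that comparison concrete, here is a minimal sketch of producing to and consuming from an unbounded Kafka topic, assuming the open source confluent-kafka Python client and a broker on localhost:9092; the topic and consumer-group names are illustrative.

```python
# Minimal sketch: append events to a partitioned, ordered log and read them
# back as a stream. Assumes a broker on localhost:9092; names are illustrative.
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
for i in range(5):
    # Events are appended to a topic; the log itself has no natural end.
    producer.produce("sensor-readings", key=str(i), value=f"reading-{i}")
producer.flush()  # block until all buffered messages are delivered

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "demo-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["sensor-readings"])

received = 0
try:
    while received < 5:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue            # nothing new yet; keep waiting on the stream
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(msg.key(), msg.value().decode("utf-8"))
        received += 1
finally:
    consumer.close()
```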
A quick rundown of multi-runtime microservices architecture
A multi-runtime microservices architecture represents a two-component model that closely resembles the classic client-server relationship. However, the components that define multi-runtime microservices -- the micrologic and the mecha -- reside on the same host. Despite this, the micrologic and mecha components still operate on their own, independent runtimes (hence the term "multi-runtime" microservices). The micrologic is not, strictly speaking, a component that lives among the various microservices in your environment. Instead, it contains the underlying business logic needed to facilitate communication using predefined APIs and protocols. It is responsible only for this core business logic, not for any logic contained within the individual microservices. The only thing it needs to interact with is the second multi-runtime microservices component -- the mecha. The mecha is a distributed, reusable and configurable component that provides off-the-shelf primitives geared toward distributed services. The mecha uses declarative configuration to determine the desired application states and manage them, often relying on plain-text formats such as JSON and YAML.
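To illustrate that declarative style, here is a minimal sketch assuming a hypothetical mecha sidecar whose desired state is described in YAML and reconciled at runtime; the field names are invented for illustration and only loosely modelled on sidecar runtimes such as Dapr.

```python
# Minimal sketch: the micrologic declares *what* it needs (state store, pub/sub,
# retry policy) and a hypothetical mecha sidecar reconciles that desired state.
# All field names below are invented for illustration.
import yaml  # PyYAML

DESIRED_STATE = """
mecha:
  bindings:
    - name: orders-state
      type: state.redis        # off-the-shelf primitive provided by the mecha
      connection: redis://localhost:6379
    - name: orders-events
      type: pubsub.kafka
      connection: localhost:9092
  resiliency:
    retries: 3
    timeoutSeconds: 5
"""

config = yaml.safe_load(DESIRED_STATE)

# The micrologic never talks to Redis or Kafka directly; it only needs the
# logical binding names it should address the mecha with.
for binding in config["mecha"]["bindings"]:
    print(f"micrologic will use binding '{binding['name']}' of type {binding['type']}")
```

The point of the split is that connection, broker and retry details live in the mecha’s configuration rather than in the business logic.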
Basics Of Julia Programming Language For Data Scientists
Julia is a relatively new, fast, high-level dynamic programming language. Although it is a general-purpose language and can be used to write all kinds of applications, much of its package ecosystem and many of its features are designed for high-level numerical computing. Julia draws from various languages, from low-level systems programming languages like C to high-level dynamically typed languages such as Python, R and MATLAB, and this is reflected in its optional typing, its syntax and its features. Julia doesn’t have classes; instead, it supports the quick creation of custom types and of methods for those types. These functions are not limited to the types they are created for and can have many versions, a feature called multiple dispatch. Custom types are defined with the struct keyword, and Julia supports direct calls to C functions without any wrapper API. And instead of defining scope by indentation like Python, Julia uses the keyword end, much like MATLAB. It would be impractical to summarize all of its features and idiosyncrasies here; refer to the wiki or the docs welcome page for a more comprehensive description of Julia.
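To make a couple of those points concrete, here is a small sketch in Julia itself (the language this excerpt describes); the type and function names are illustrative.

```julia
# Minimal sketch: a custom type, multiple dispatch and `end`-delimited blocks.
# Type and function names are illustrative.
struct Point            # custom types via `struct`, no classes
    x::Float64
    y::Float64
end

# Two methods of one function, selected by argument types (multiple dispatch)
magnitude(p::Point) = sqrt(p.x^2 + p.y^2)
magnitude(x::Real)  = abs(x)

# Blocks close with `end` rather than by indentation
for p in (Point(3.0, 4.0), Point(1.0, 1.0))
    println(magnitude(p))          # 5.0, then 1.4142135623730951
end
println(magnitude(-2))             # 2

# A direct call into libc, with no wrapper API
println(ccall(:getpid, Cint, ()))
```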
NCSC publishes smart city security guidelines
Mark Jackson, Cisco’s national cyber security advisor for the UK and Ireland, said: “The complexity of the smart cities marketplace, with multiple device manufacturers and IT providers in play, could quite easily present cyber security issues that undermine these efforts. The NCSC’s principles are one of the most sophisticated pieces of government-led guidance published in Europe to date.
“The guidance set out for connected places generally aligns to cyber security best practice for enterprise environments, but also accounts for the challenges of connecting up different systems within our national critical infrastructure.
“With DCMS [the Department for Digital, Culture, Media and Sport] also planning to implement legislation around smart device security, this is indicative of a broader government strategy to level up IoT security across the board.
“This will enable new initiatives in the field of connected places and smart cities to gather momentum across the UK – with cyber security baked into the design and build phase. As lockdown restrictions ease and people return to workplaces and town centres, they need assurance that their digital identities and data are protected as the world around them becomes more connected.”
What if the hybrid office isn’t real?
“A shift to hybrid work means that people will be returning to the office both with varying frequencies and for a new set of reasons,” says Brian Stromquist, co-leader of the technology workplace team at the San Francisco–based architecture and design firm Gensler. “What people are missing right now are in-person collaborations and a sense of cultural connection, so the workplace of the future — one that supports hybrid work — will be weighted toward these functions.” Offices will need a way to preserve a level playing field for those working from home and those on-site. One option is to make all meetings “remote” if not everyone is physically in the same space. That’s a possibility Steve Hare, CEO of Sage Group, a large U.K. software company, suggested to strategy+business last year. According to Stromquist, maintaining the right dynamic will require investing in technologies that create and foster connections between all employees, regardless of physical location. “We’re looking at tools like virtual portals that allow remote participants to feel like they’re there in the room, privy to the interactions and side conversations that you’d experience if you were there in person,” he says.
Real-time data movement is no longer a “nice to have”
Applications and systems can “publish” events to the mesh, while others can “subscribe” to whatever they are interested in, irrespective of where they are deployed: in the factory, the data centre or the cloud. This is essential for the critical industries we rely on, such as capital markets, Industry 4.0 and functioning supply chains. Indeed, there are few industries today that can do without as-it-happens updates on their systems. Businesses and consumers demand extreme responsiveness as a key part of a good customer experience, and many technologies depend on real-time updates to changes in the system. However, many existing methods for ensuring absolute control and precision of such time-sensitive logistics don’t operate holistically in real time, at scale and without data loss, and therefore leave room for fatal error. From retail, which relies on the online store being in constant communication with the warehouse and the dispatch team, to aviation, where pilots depend on real-time weather updates to carry their passengers to safety, today’s industries cannot afford anything other than real-time data movement. Overall, when data is able to move in this way, businesses can make better decisions.
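As a minimal sketch of that publish/subscribe contract, here is a toy in-process version in Python; a real event mesh distributes the same pattern across factories, data centres and clouds, and the topic names below are illustrative.

```python
# Minimal sketch: publishers and subscribers are decoupled by topic, so the
# store front, warehouse and dispatch team never need to know about each other.
from collections import defaultdict
from typing import Callable, DefaultDict, List


class EventMesh:
    """Toy in-process stand-in for an event mesh or broker."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every subscriber of the topic receives the event as it happens.
        for handler in self._subscribers[topic]:
            handler(event)


mesh = EventMesh()
mesh.subscribe("orders/created", lambda e: print(f"warehouse picks order {e['id']}"))
mesh.subscribe("orders/created", lambda e: print(f"dispatch books courier for {e['id']}"))

# The store front publishes once; every interested system reacts immediately.
mesh.publish("orders/created", {"id": 42, "sku": "ABC-123"})
```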
The Cloud Comes of Age Amid Unprecedented Change
Just look at how businesses compete. The influx of cloud technologies during the pandemic has underlined that the technology stack is a core mode of differentiation. Industry competition is now frequently a battle between technology stacks, and the decisions leaders make around their cloud foundation, cloud services and cloud-based AI and edge applications will define their success. Look at manufacturing, where companies are using predictive analytics and robotics to inch ever closer to delivering highly customized, on-demand products. The pandemic has forced even manufacturers with the most complex supply chain operations to operate at the whim of changing government requirements, consumer needs and other uncontrollable factors, such as daily pandemic fluctuations. Pivot quickly and you’ll not only emerge as a leader of your industry, you may even gain immeasurable consumer intimacy. A true cloud transformation should start with a plan to shift significant capabilities to the cloud; it is more than just migrating a few enterprise applications. Implementing a “cloud first” strategy requires companies to completely reinvent their business for the cloud by reimagining their products or services, workforce and customer experiences.
"Don't try to be the "next". Instead, try to be the other, the changer, the new." -- Seth Godin