For IT and business leaders, the message is clear. Technologists remain fully committed to the cause – they are desperate to have a positive impact, guide their organisations through the current crisis and leave a legacy of innovation. But it’s simply not sustainable (or fair) to ask technologists to continue as they are, when 91% say that they need to find a better work-life balance in 2021. As an industry and as business leaders, we need to do more to manage workload and stress, and protect wellbeing and mental health. Technologists have to be given more support to deal with the heightened level of complexity in which they are now operating. That means having access to the right tools, data, and resources, and organisations protecting their wellbeing, both inside and outside working hours. In 2018, we revealed that 9% of technologists were operating as Agents of Transformation – elite technologists with the skills, vision and ambition to deliver innovation within their organisations – but that organisations needed five times as many technologists to be performing at that level in order to compete over the next ten years.
Agile thinking is essential not just for the enterprise architect but for many other IT jobs as well; enterprise architecture, however, is one field in which it is indispensable. Agile thinking doesn’t just mean thinking fast; it means thinking fast and right, adapting to situations as you improve your models and solutions. Being an agile thinker is key to a successful career as a modern enterprise architect: as market conditions change rapidly, you must adapt to those changes and keep your solutions robust. Data-Driven Decision Makers use facts and logic from the available information to make informed decisions. As many professionals say, everything you need is in the data available to you, which makes data-driven decision-making an essential quality for any enterprise architect. This process will help identify management systems, operating routes, and much more that align with your enterprise-level goals. One of the primary sources of data for decision-making is the users themselves: companies typically collect data from user sessions and analyze it to understand user behavior.
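To make that last point concrete, here is a minimal, purely illustrative Python sketch of session-based analysis; the column names and figures are invented for the example and are not drawn from the article.

```python
# Illustrative only: aggregating hypothetical user-session data so a decision
# about which features to invest in is grounded in evidence, not opinion.
import pandas as pd

# Assume each row is one user session (all values are made up)
sessions = pd.DataFrame({
    "user_id":      [1, 1, 2, 3, 3, 3],
    "feature_used": ["search", "export", "search", "search", "export", "report"],
    "duration_sec": [120, 45, 300, 90, 60, 240],
    "converted":    [0, 1, 1, 0, 0, 1],
})

# Which features correlate with longer engagement and conversion?
summary = (sessions
           .groupby("feature_used")
           .agg(sessions=("user_id", "count"),
                avg_duration=("duration_sec", "mean"),
                conversion_rate=("converted", "mean"))
           .sort_values("conversion_rate", ascending=False))

print(summary)  # evidence to weigh against enterprise-level goals
```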
Unified security must be deployed broadly and consistently across every edge. Far too many organizations now own some edge environment that is unsecured or undersecured, and cybercriminals are taking full advantage of this. The most commonly unprotected/underprotected environments include home offices, mobile workers, and IoT devices. OT environments are also often less secure than they should be, as are large hyperscale/hyper-performance data centers where security tools cannot keep up with the speed and volume of traffic requiring inspection. Security solutions also need to be integrated so they can see and talk to each other. Isolated point security products can actually decrease visibility and control, especially as threat actors begin to deliver sophisticated, multi-vector attacks that take advantage of an outdated security system’s inability to correlate threat intelligence across devices or edges in real time, or provide a consistent, coordinated response to threats. Addressing this challenge requires an integrated approach, built around a unified security platform that can be extended to every new edge environment.
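As a rough illustration of what "seeing and talking to each other" means in practice, the Python sketch below correlates alerts reported by different edge environments through a shared indicator. The edge names, indicators and events are hypothetical placeholders for whatever a real unified platform would ingest, not any vendor's schema.

```python
# Conceptual sketch: joining alerts from several edges by a shared indicator so
# they can be treated as one coordinated incident rather than isolated noise.
from collections import defaultdict

# Hypothetical alerts reported independently by different edge environments
alerts = [
    {"edge": "home-office", "indicator": "203.0.113.7",  "event": "phishing click"},
    {"edge": "iot",         "indicator": "203.0.113.7",  "event": "anomalous outbound traffic"},
    {"edge": "data-center", "indicator": "203.0.113.7",  "event": "lateral movement attempt"},
    {"edge": "ot",          "indicator": "198.51.100.4", "event": "unauthorized controller write"},
]

# Group by indicator: the correlation isolated point products cannot do on their own
incidents = defaultdict(list)
for alert in alerts:
    incidents[alert["indicator"]].append((alert["edge"], alert["event"]))

for indicator, events in incidents.items():
    if len(events) > 1:
        print(f"Multi-edge activity from {indicator}:")
        for edge, event in events:
            print(f"  - {edge}: {event}")
```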
rMTD is the process of making an existing vulnerability difficult to exploit. This can be achieved through a variety of techniques that are either static – built in during compilation of the application, referred to as Compile Time Application Self Protection (CASP) – or dynamically enforced during runtime, referred to as Runtime Application Self Protection (RASP). CASP and RASP are not mutually exclusive and can be combined. CASP modifies the application's generated assembly code during compilation in such a way that no two compilations generate the same assembly instruction set. Hackers rely on a known assembly layout from a static compilation in order to craft their attack; once they've built it, they can target any system running the same binaries. They leverage the static nature of the compiled application or operating system to hijack systems. This is analogous to a thief getting a copy of the same safe you have and having the time to figure out how to crack it. The only difference is that it is far easier for a hacker to get a copy of the software than of a safe, and the vulnerability is known and published.
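The safe analogy can be made concrete with a toy sketch. The Python below is not a compiler pass, only a simulation of the idea: each "build" lays out the same functions at different (fake) addresses, so an address an attacker hard-codes from one build is useless against another.

```python
# Toy illustration of per-build layout randomization; all names and addresses
# are invented and no real CASP tooling is being shown.
import random

FUNCTIONS = ["parse_input", "check_auth", "copy_buffer", "log_event"]

def build(seed):
    """Simulate one 'compilation': same functions, build-specific layout."""
    rng = random.Random(seed)
    layout = FUNCTIONS[:]
    rng.shuffle(layout)
    # assign each function a distinct fake address based on its shuffled position
    return {name: 0x1000 + i * 0x40 for i, name in enumerate(layout)}

build_a = build(seed=1)   # the copy the attacker studies at leisure
build_b = build(seed=2)   # a CASP-style rebuild: same code, different layout

print("copy_buffer in build A:", hex(build_a["copy_buffer"]))
print("copy_buffer in build B:", hex(build_b["copy_buffer"]))
print("hard-coded exploit address still valid:",
      build_a["copy_buffer"] == build_b["copy_buffer"])
```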
Why is AIOps so slow to catch on? Ultimately, the barriers facing these tools are the same as those facing human engineers: massive and growing complexity in IT environments. As digital products become more dependent on third-party cloud services, and as the number of things businesses want to track grows (from infrastructure to application to experience), the sheer volume, velocity and variety of monitoring data have exploded. ... Compounding the problem, enterprises increasingly rely on multiple “same-service” providers for IT services. That is, they use multiple cloud providers, multiple DNS providers, multiple API providers, etc. There are sound business reasons for doing so, such as adding resiliency and drawing on different vendors’ strengths in different areas. But even when two providers are doing basically the same thing, they use different interfaces and instrumentation, and their data sources often employ different metrics, data structures, and taxonomies. Whether you’re asking a human being or an AI-driven tool to solve this problem, this heterogeneity makes it extremely difficult to visualize the complete picture across the infrastructure. It also creates gray areas around how best to take advantage of each vendor’s different rules and toolsets.
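The heterogeneity problem is easy to see in miniature. In the hedged Python sketch below, two hypothetical "same-service" DNS providers report latency and errors under different field names and units, and a small normalization layer maps both onto one taxonomy; every name and number is made up for illustration.

```python
# Every provider name, field name and unit below is invented; no real vendor
# schema is being described. The point is only the normalization step itself.

provider_a = {"resolver": "dns-a", "avg_resp_ms": 23.0, "err_pct": 0.4}
provider_b = {"service": "dns-b", "latency_us": 19_500, "failures_per_10k": 38}

def normalize_a(raw):
    """Map provider A's fields and units onto a common taxonomy."""
    return {"provider": raw["resolver"],
            "latency_ms": raw["avg_resp_ms"],
            "error_rate": raw["err_pct"] / 100}

def normalize_b(raw):
    """Provider B reports microseconds and failures-per-10k; convert both."""
    return {"provider": raw["service"],
            "latency_ms": raw["latency_us"] / 1000,
            "error_rate": raw["failures_per_10k"] / 10_000}

# Only after normalization can a human or an AIOps engine compare like with like.
for row in (normalize_a(provider_a), normalize_b(provider_b)):
    print(f"{row['provider']}: {row['latency_ms']:.1f} ms, {row['error_rate']:.2%} errors")
```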
Here’s the punchline: Everything relies on Active Directory. To get your boss to care, start with a discussion about operations and which parts are business critical. Have a business-level discussion, with you keeping score at a technical level. For example, when your boss says “Development needs to be running 100 percent of the time,” you work backward through all the systems, applications, and endpoints that need AD to function. Repeat this until you have a sufficient list of critical workloads and business operations that require AD to be secure and functional. Next, talk about which of those environments need to be protected, which contain sensitive data, and which need to be resilient against a cyberattack. Let your boss talk while you just sit back, smile, and check off the boxes of everything that relies heavily on AD. Once you are armed with enough business ammo, have the technical discussion about how each of the business functions listed by your boss relies on AD to provide users access to data, applications, systems, and environments.
The main innovation behind this, Rauch said, is that Vercel has moved the entire dev server technology, which previously lived in a Node.js process on your local machine, entirely into the web browser. “So, all the technology for transforming the front-end UI components is now entirely ‘dogfooded’ inside the web browser, and that’s giving us the next milestone in terms of developer performance,” he said. “It makes front-end development multiplayer instead of single player.” Moreover, by tapping into ServiceWorker, WebAssembly and ES Modules technology, Vercel makes everything that’s possible when you run Next.js on a local machine possible in the context of a remote collaboration. Next.js Live also works offline and eliminates the need to run or operate remote virtual machines. Meanwhile, the Aurora team in the Google Chrome unit has been working on technology to advance Next.js and has delivered Conformance for Next.js and the Next.js Script Component. Rauch described Conformance as a co-pilot that helps the developer stay within certain guardrails for performance.
My fascination with AI began back in 1997, when I was in India and heard about IBM’s supercomputer Deep Blue defeating Garry Kasparov. It made top headlines at the time, and I wanted to explore the subject further. However, access to research papers was really hard then, as I didn’t even own a computer or have access to the internet. My father introduced me to computers when I was 10, on a machine in his office; the first thing I explored was Lotus Notes. With encouragement from my parents, I later pursued Computer Science Engineering. When I started working, I read several IEEE research papers, such as “Smart games: beyond the Deep Blue horizon” and “Deep Blue’s hardware-software synergy”. I was fascinated not only with AI but also with applying AI to solve real problems. I was also passionate about Biomedical Engineering, which led me to books on Neural Networks & AI for Biomedical Engineering and papers on Training Neural Networks for Computer-Aided Diagnosis. When it comes to machine learning, I am largely self-taught.
Access to up-to-the-minute information is essential for a CIO who hopes to maintain a strong supply chain. "Real-time data ensures that your supply team has the proper information required to make good, reliable decisions," Roberge said. "My advice is to automate as many data points as possible -- the fewer spreadsheets the better." ... Today's supply chain cannot be managed effectively or efficiently without adequate foundational tools, Furlong cautioned. "Appropriate technologies, implemented in a timely manner, can help an organization transform the supply chain and leapfrog the competition," he explained. "This includes everything from advanced predictive analytics to ... cutting-edge technologies such as blockchain, which is being used to track shipments at a micro level." CIOs also need to regularly assess and replace aging supply chain software, hardware, and network tools with modern systems leveraging both internal resources and third-party alliances. "Business requirements are changing rapidly, and supply chain technology ... must be flexible enough to handle complex business processes but also simplify supply chain processes," Furlong said.
Data warehouses have been a popular option since the 1980s and revolutionised the data world we live in, enabling business intelligence tools to be plugged in to ask questions about the past. Looking for future insights is more difficult, however, and there are restrictions on the volume and formats of the data that can be analysed. Data lakes, on the other hand, enable artificial intelligence (AI) to be utilised to ask questions about future scenarios. But data lakes have a weakness of their own: any data can be stored, cleaned and analysed, yet a lake can quickly become disorganised and turn into a ‘data swamp’. Taking the best of both options, a new data architecture is emerging. Lakehouses are a technological breakthrough that finally allows businesses to look both forward to future scenarios and back to the past in the same space, at the same time, revolutionising the future of data capabilities. It is the solution enterprises have been calling out for throughout the last decade at least: by combining the best elements of the data warehouse and the data lake, the lakehouse enables enterprises to implement a superior data strategy, achieve better data management, and squeeze the full potential out of their data.
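As a toy illustration of that "best of both" idea (not any specific vendor's API), the Python sketch below keeps a single open-format Parquet copy of some invented sales data and uses it for both a backward-looking BI aggregate and a deliberately naive forward-looking forecast; real lakehouse engines such as Delta Lake or Apache Iceberg add ACID transactions, schema enforcement and governance on top of files like these.

```python
# Toy lakehouse sketch: one open-format copy of (invented) sales data answers
# both a "warehouse" question about the past and a "lake" question about the future.
# Requires pandas with a Parquet engine such as pyarrow.
import numpy as np
import pandas as pd

orders = pd.DataFrame({
    "month":   ["2021-01", "2021-02", "2021-03", "2021-04"],
    "region":  ["EMEA", "EMEA", "EMEA", "EMEA"],
    "revenue": [120_000, 132_000, 141_000, 155_000],
})
orders.to_parquet("orders.parquet")        # the single shared copy, open format

lake = pd.read_parquet("orders.parquet")

# "Warehouse" side: ask a question about the past
print(lake.groupby("region")["revenue"].sum())

# "Lake" side: feed the very same data to a naive trend-based forecast
trend = np.polyfit(np.arange(len(lake)), lake["revenue"], 1)
print("naive next-month forecast:", round(np.polyval(trend, len(lake))))
```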
Quote for the day:
"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks