Software Defined Perimeter (SDP): The deployment
SDP architectures are user-centric, meaning they validate the user and the device before permitting any access. Access policies are created based on user attributes. This contrasts with traditional networking systems, which rely solely on IP addresses and consider nothing about the user or their device. Assessing contextual information is a key aspect of SDP. Tying security policy to the IP address alone gives us no valid hook to hang enforcement on; we need to assess more than the IP address. For device and user assessment, we need to look deeper and move beyond IP, not only as an anchor for network location but also as a proxy for trust. Ideally, a viable SDP solution analyzes the user, role, device, location, time, network and application, along with the endpoint security state. By leveraging elements such as directory group membership, IAM-assigned attributes and user roles, an organization can define and control access to network resources in a way that is meaningful to the business, security and compliance teams.
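To make that concrete, here is a minimal sketch of the kind of contextual check an SDP controller might run before brokering a connection. Every field, rule and value below is hypothetical and purely illustrative; a real deployment would pull these signals from the directory, IAM and endpoint-management systems rather than hard-coding them.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    """Hypothetical request context evaluated instead of a bare source IP."""
    user: str
    role: str                # e.g. an IAM-assigned role
    groups: set              # e.g. directory group membership
    device_compliant: bool   # endpoint security posture check passed
    location: str
    request_time: time
    application: str

def is_access_allowed(req: AccessRequest) -> bool:
    """Combine user, device and context signals; all rules are illustrative."""
    in_business_hours = time(7, 0) <= req.request_time <= time(19, 0)
    allowed_role = req.role in {"finance-analyst", "finance-manager"}
    allowed_group = "erp-users" in req.groups
    return (req.device_compliant
            and allowed_role
            and allowed_group
            and req.location not in {"embargoed-region"}
            and in_business_hours
            and req.application == "erp")

request = AccessRequest(
    user="adavis", role="finance-analyst", groups={"erp-users"},
    device_compliant=True, location="uk-office",
    request_time=time(9, 30), application="erp",
)
print(is_access_allowed(request))   # True under these illustrative rules
```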
Troy Hunt: Why Data Breaches Persist
"Anecdotally, it just feels like we're seeing a massive increase recently," he says. "I do wonder how much of it is due to legislation in various parts of the world around mandatory disclosure as well. Maybe we're just seeing more stuff come to the surface that otherwise may not have been exposed." But the potential for even bigger breaches also continues to rise, he says. "I don't see any good reason why data breaches should be reducing, certainly not in numbers," Hunt says. "I reckon there are a bunch of factors ... that are amplifying certainly the rate of breaches and also the scale of them." Such factors, he says, include the ever-increasing amounts of data being generated by organizations and individuals, the increasing use of the cloud - and the ease of losing control of data in the cloud - as well as the many more internet of things devices being brought into the world. In a video interview at the recent Infosecurity Europe conference, Hunt discusses: Long-term forecasts about data breach quantity and severity; Why breach perpetrators so often continue to be children; and How so much "smart" technology aimed at children continues to be beset by abysmal security.
Explore 4 key areas of enterprise network transformation
The top issue among the IT professionals surveyed was a lack of time to complete business initiative projects -- 43% of respondents said they struggle with this. In addition, 42% of respondents said they struggle to troubleshoot across the network as a whole. These blind spots can impede NetOps, network performance quality and, therefore, network transformation. Overall, a poorly performing network negatively affected business performance as a whole, respondents said. As such, respondents said they would prioritize the following areas of network performance: application performance, remote site performance, and endpoint and wireless performance. These improvements were among the most common goals for networking and IT professionals, according to the study. To support these network transformation goals, 37% of teams said they hope to upgrade their network performance management service. Teams can address several network performance issues with improved end-to-end visibility of their network and more insight into specific network issues.
The Importance of Metrics to Agile Teams
Many programmes fail simply because teams could not agree or gain buy-in on meaningful sets of metrics or objectives. By its very nature, Agile encourages a myriad of different methodologies and workflows, which vary by team and company. However, this does not mean it's impossible to achieve consensus on metrics for SI. We believe the trick is to keep metrics simple and deterministic. Complex metrics will not be commonly understood and can be hard to measure consistently, which can lead to distrust. And deterministic metrics are key, as improving them will actually deliver a better outcome. As an example: you may measure Lead Time as an overall proxy for Time to Value, but Lead Time is a measure of the outcome. It's also important to measure the things that drive and determine Lead Time, levers that teams can actively manage in order to drive improvements in the overarching metric (e.g. determinant metrics like Flow Efficiency). The deterministic metrics we advocate are designed to underpin team SI, in order to steadily improve Agile engineering effectiveness.
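As a rough illustration of the outcome-versus-determinant distinction, the sketch below computes lead time and flow efficiency for a couple of hypothetical work items. The data, field names and calculation details are assumptions made for the example, not a prescribed measurement scheme.

```python
from datetime import datetime

# Hypothetical work-item records; timestamps and "active_days" (time spent
# actively being worked on) are illustrative only.
items = [
    {"created": datetime(2019, 6, 3), "done": datetime(2019, 6, 17), "active_days": 4},
    {"created": datetime(2019, 6, 5), "done": datetime(2019, 6, 12), "active_days": 5},
]

def lead_time_days(item):
    """Outcome metric: elapsed calendar time from request to delivery."""
    return (item["done"] - item["created"]).days

def flow_efficiency(item):
    """Determinant metric: share of the lead time spent actively working."""
    return item["active_days"] / lead_time_days(item)

avg_lead_time = sum(lead_time_days(i) for i in items) / len(items)
avg_flow_eff = sum(flow_efficiency(i) for i in items) / len(items)
print(f"avg lead time: {avg_lead_time:.1f} days, avg flow efficiency: {avg_flow_eff:.0%}")
```

Improving the determinant (less waiting, more active time) is what actually pulls the outcome metric down, which is why it is the lever worth managing day to day.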
Microsoft’s road to multicloud support
An important part of Microsoft’s multicloud strategy is Azure Stack, preconfigured hardware that runs Azure services and can be deployed locally. However, Kubernetes support on-premises via Azure Stack lags behind the support for Kubernetes on the public Azure cloud. “We have Kubernetes on Azure Stack through a project called AKS Engine, which is in preview now,” says Monroy. He claims that AKS Engine will be generally available “soon”, adding: “We have a lot of customers who are using this today.” Serverless containers offer developers a way to achieve multicloud portability. In the Microsoft world, Azure AKS virtual nodes can be deployed to run workloads in Azure Container Instances. “There is no lock-in, nothing Azure-specific – you just annotate your workloads and say ‘I want to opt in to this scaling capability’ and we’re able to provide per-second billing,” says Monroy. “If you take that same workload and you run it on a different cloud, it’s going to run.” But AKS virtual nodes are not yet available for Azure in the UK – although they are available elsewhere in Europe.
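In practice, the opt-in Monroy describes is usually expressed on the workload's pod spec as a node selector plus tolerations for the virtual-kubelet-backed node. The sketch below, using the official Kubernetes Python client, patches a hypothetical deployment in that way; the selector and toleration values follow the pattern commonly documented for AKS virtual nodes and should be checked against the current Azure documentation before use.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. after `az aks get-credentials`).
config.load_kube_config()
apps = client.AppsV1Api()

# Strategic-merge patch that lets the scheduler place pods on the ACI-backed
# virtual node. Deployment name/namespace are hypothetical; the selector and
# toleration keys mirror the commonly documented virtual-node pattern.
patch = {
    "spec": {
        "template": {
            "spec": {
                "nodeSelector": {
                    "kubernetes.io/role": "agent",
                    "beta.kubernetes.io/os": "linux",
                    "type": "virtual-kubelet",
                },
                "tolerations": [
                    {"key": "virtual-kubelet.io/provider", "operator": "Exists"},
                    {"key": "azure.com/aci", "effect": "NoSchedule"},
                ],
            }
        }
    }
}

apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)
```

The point of Monroy's claim is that nothing in the workload itself changes: remove the selector and tolerations and the same container spec schedules onto ordinary nodes on any other Kubernetes cluster.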
Data Governance and Data Architecture: There is No Silver Bullet
Having tools and technology facilitates the process of understanding the data, where it’s stored, how it’s organized, what the processes are, and how it’s all tied together, “but it’s not the ‘easy’ button that does everything for you.” Some companies have been trying to rely on metadata repositories alone, but the real key, he said, is in modeling. “A picture’s worth a thousand words, right?” Having the metadata and being able to do analytics and queries is helpful, but without pictures that explain how all the elements are related, and understanding the data lineage and life cycle, “You don’t have a chance.” Keeping higher-level business goals in mind is essential, but implementation should be focused on the fundamentals. “Metadata is a big piece of that too. A lot of the metadata is focused up at that higher level. Are your metadata management tools really getting down to the lower level?” Data and process modeling in particular are more important now than they’ve ever been before, he said, but that modeling should be coupled with the reverse engineering capabilities and all the tools and processes needed to do proper governance.
Disposable Technology: A Concept Whose Time Has Come
Modern digital companies like Google, Facebook, Twitter, Apple, Netflix, Amazon, and Airbnb have taken a technology architecture approach that increasingly treats the technology infrastructure as “disposable”, using open source technologies. And the reason for this open approach, in my humble opinion, is two-fold: Firstly, building upon open source technologies provides the flexibility, agility and mobility for companies to move to the next best technology without the constraints ... Modern digital companies are basing their technology infrastructure on open source technologies that not only prevent vendor architectural lock-in but also allow them to advance the technology capabilities at their pace and at the pace of the business; and Secondly, and more importantly, these digital companies understand that the technology isn’t the source of business value and differentiation. They understand that the source of business value and differentiation is: the data that these organizations are masterfully amassing via every customer engagement and every usage of the product or service; and the customer, product and operational insights that lead to new Intellectual Property (IP) monetization and commercialization opportunities.
Blue Prism acquires UK’s Thoughtonomy to expand its RPA platform with more AI
Robotic process automation, which lets organizations shift repetitive back-office tasks to machines to complete, has been a hot area of growth in the world of enterprise IT, and now one of the companies making waves in the area has acquired a smaller startup to continue extending its capabilities. Blue Prism, which helped coin the term RPA when it was founded back in 2001, has announced that it is buying Thoughtonomy, which has built a cloud-based AI engine that delivers RPA-based solutions on a SaaS framework. Blue Prism is publicly traded on the London Stock Exchange, where its market cap is around £1.3 billion ($1.6 billion), and in a statement to the market alongside its half-year earnings, it said it would pay up to £80 million ($100 million) for the firm. The deal is a combination of cash and stock: £12.5 million payable on completion of the deal, £23 million in shares payable on completion, up to £20 million payable a year after the deal closes, up to £4.5 million in cash after 18 months, and a final £20 million in shares on the second anniversary of the deal closing.
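For reference, those tranches add up exactly to the headline figure: £12.5M + £23M + £20M + £4.5M + £20M = £80M.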
Codes Tell the Story: A Fruitful Supply Chain Flourishes
Sharing anecdotes from the process, McMillan gave the audience several practical tips. She noted that Usage and Procedure Logging (UPL) provided invaluable insights for the migration. “This tells you not only which objects you’re touching, but also which business processes they’re calling: Warehouse or inventory management? We used this to figure out what’s really being used,” with respect to custom coding. She said the results were very promising: “What we found out, in production, is that almost 60% of the custom code developed in the last 5-10 years was not being used! I can’t tell you how many of those custom scripts were used just once, and never touched again.” This was fantastic news, because custom code can cause serious headaches when undergoing a migration of this magnitude. Every last bit of custom code needs to be vetted, which can be very time-consuming and error-prone. “Some of the most tedious parts were really challenging,” McMillan said. “Having to go through object by object took a lot of time; certain tables that SAP made obsolete; fields whose types have changed. When you do the migration, you can’t code the same way you used to code.”
Obscuring Complexity
How can MDSD obscure the complexity of your application code? It is tricky, but it can be done. The generator outputs the code that implements the API resources, so the developers don't have to worry about coding that. However, if you use the generator as a one-time code wizard and commit the output to your version-controlled source code repository (e.g. git), then all you did was save some initial coding time. You didn't really hide anything, since the developers will have to study and maintain the generated code. To truly obscure the complexity of this code, you have to commit the model to your version-controlled source code repository, but not the generated source code. You need to generate that output source from the model every time you build the code, which means adding the generator step to all your build pipelines. Maven users will want to configure the swagger-codegen-maven-plugin in their pom file; that plugin is a module in the swagger-codegen project. What if you do have to make changes to the generated source code? Rather than editing the output directly, you change the generation itself: that is why you will have to assume ownership of the templates and also commit them to your version-controlled source code repository.
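As a rough illustration of "commit the model and templates, regenerate on every build", here is a minimal sketch of such a build step written as a standalone Python script rather than the Maven plugin configuration mentioned above. The paths, the chosen target language and the assumption that a swagger-codegen CLI is available on the PATH are all hypothetical.

```python
"""Illustrative build step: regenerate server stubs from the committed model."""
import shutil
import subprocess

MODEL = "api/openapi.yaml"          # the committed model (source of truth)
TEMPLATES = "codegen/templates"     # customized templates, also committed
OUTPUT = "build/generated"          # generated code: rebuilt every time, never committed

def generate():
    # Wipe the previous output so nobody is tempted to patch stale generated code.
    shutil.rmtree(OUTPUT, ignore_errors=True)
    subprocess.run(
        [
            "swagger-codegen", "generate",
            "-i", MODEL,        # input model
            "-l", "spring",     # target language/framework (example choice)
            "-t", TEMPLATES,    # our own templates instead of the bundled ones
            "-o", OUTPUT,
        ],
        check=True,             # fail the build if generation fails
    )

if __name__ == "__main__":
    generate()
```

The key design point is the same either way: any change you want to see in the generated code is made in the model or the templates, never in the output directory.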
Quote for the day:
"Do not compromise yourself. You are all you have got." -- Janis Joplin