Efficiency and Innovation: The Latest in AWS News
PointFive Team
April 25, 2024

The cloud services industry is constantly changing, with new features and products released regularly that aim to maximize savings and improve productivity. Recently, we’ve seen several developments from AWS that, while varied, all aim to improve efficiency and capability across cloud operations, from cost management to render management.

These changes promise to improve user capabilities and streamline operations, ranging from the release of AWS Deadline Cloud for render job management to the dynamic management of public IPv4 addresses on EC2 instances. Let’s take a look.

Management of Public IPv4 Addresses on EC2 Instances

Amazon VPC now offers a public IP configuration on the network interface of EC2 instances that lets users dynamically add and remove an auto-assigned public IPv4 address.

Customers who no longer need an automatically assigned public IPv4 address on their EC2 instance can use this feature to update the public IP configuration on the network interface, removing the address and, if necessary, attaching a new one later.

Previously, a public IPv4 address that was automatically assigned to an EC2 instance could not be removed; it stayed on the network interface for the life of the instance.

Customers can now be more efficient and save money on public IPv4 charges thanks to the network interface’s public IP configuration. Customers who are migrating to a private IPv4 address for SSH via an EC2 Instance Connect Endpoint, or who simply no longer need the auto-assigned public IPv4 address, can remove it directly instead of recreating their applications on a new EC2 instance launched without one.
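For teams that want to automate this cleanup, the change can be scripted. Below is a minimal sketch using boto3, assuming the AssociatePublicIpAddress option on ModifyNetworkInterfaceAttribute is available in your SDK version; the network interface ID is a placeholder.

```python
# Hedged sketch: remove an auto-assigned public IPv4 address from an instance's
# primary network interface, assuming the AssociatePublicIpAddress option on
# ModifyNetworkInterfaceAttribute. The ENI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# Setting AssociatePublicIpAddress to False releases the auto-assigned
# public IPv4 address from the primary network interface (eth0).
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0123456789abcdef0",  # placeholder ENI ID
    AssociatePublicIpAddress=False,
)

# Setting it back to True later would request a new auto-assigned public
# IPv4 address if the instance needs one again.
```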

Cost Analysis and Allocation for Amazon EKS

AWS Cost and Usage Reports (CUR) support for Amazon Elastic Kubernetes Service (Amazon EKS) lets users analyze, optimize, and charge back the cost and usage of their Kubernetes applications.

Customers have asked for more detailed cost allocation for applications that run on Amazon EKS and share AWS resources such as Amazon EC2 instances. According to 49% of respondents to CNCF’s most recent microsurvey on Cloud Native and Kubernetes FinOps, Kubernetes adoption has increased their cloud spending.

The survey also revealed wide variation in the degree of cost monitoring customers have in place: 40% of respondents estimate their Kubernetes expenses, only 19% perform accurate showback and 2% perform chargeback, and 38% have no monitoring at all, meaning they have no accurate view of how much they’re spending. The new AWS feature, however, creates greater clarity and visibility into these costs.

Detailed Cost Allocation for Kubernetes Pods

Users can now easily divide their Amazon EC2 instance charges at the Kubernetes pod level using Split Cost Allocation Data, based on the actual CPU and memory utilization of their Kubernetes pods.

The detailed cost information available at the container level allows users to examine the cost-effectiveness of their containerized apps and streamline the chargeback procedure for their business entities.

Upon activating Split Cost Allocation Data, all of a user’s Kubernetes pods in Amazon EKS clusters inside their Consolidated Billing family are discovered and ingested, together with information about each pod’s CPU and memory requests, namespace, and cluster.

How the Cost Model Works

  • For every EC2 instance attached to an EKS cluster, split cost allocation data for EKS gathers the requested and actual utilization of CPU and memory resources.
  • Actual usage data is collected if the cluster is enrolled in Amazon Managed Service for Prometheus; if you instead opt to allocate by resource requests alone, actual usage is treated as 0.
  • The CPU and memory allocated to each Kubernetes pod is then taken as the greater of the requested amount and the actual usage.
  • A single EC2 instance can host many Kubernetes pods, and split cost allocation data for EKS calculates the CPU and memory allotted to each pod. It then computes the split-usage ratio, the share of CPU or memory allotted to each Kubernetes pod relative to the total CPU or memory on the EC2 instance, along with the instance’s remaining unused capacity.
  • When dividing the EC2 instance cost among all Kubernetes pods, split cost allocation data sums each pod’s split cost and then redistributes the unused cost proportionally based on each pod’s utilization (see the sketch after this list).
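To make the arithmetic concrete, here is a minimal Python sketch of the allocation logic described above. It models a single resource dimension per instance; the real calculation in AWS Cost and Usage Reports handles CPU and memory separately, and the field names here are illustrative assumptions.

```python
# Illustrative sketch of splitting one EC2 instance's cost across its pods,
# following the model described above. Field names and the single-resource
# simplification are assumptions; CUR computes CPU and memory separately.
from dataclasses import dataclass


@dataclass
class Pod:
    name: str
    requested: float  # e.g. vCPUs requested by the pod
    used: float       # actual vCPUs used; treated as 0 if not measured


def split_instance_cost(instance_cost: float, instance_capacity: float,
                        pods: list[Pod]) -> dict[str, float]:
    # Allocated capacity per pod is the greater of requested and used.
    allocated = {p.name: max(p.requested, p.used) for p in pods}

    # Split-usage ratio: pod allocation relative to total instance capacity.
    split_costs = {name: instance_cost * alloc / instance_capacity
                   for name, alloc in allocated.items()}

    # Cost of the instance's unused capacity, redistributed proportionally
    # to each pod's allocation so the full instance cost is attributed.
    unused_cost = instance_cost - sum(split_costs.values())
    total_allocated = sum(allocated.values()) or 1.0  # avoid divide-by-zero
    for name, alloc in allocated.items():
        split_costs[name] += unused_cost * alloc / total_allocated

    return split_costs


# Example: a $10/hour instance with 8 vCPUs hosting two pods.
pods = [Pod("api", requested=1.0, used=1.5), Pod("worker", requested=2.0, used=0.5)]
print(split_instance_cost(instance_cost=10.0, instance_capacity=8.0, pods=pods))
```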

Amazon Q's Generative BI Features in QuickSight

Amazon Q in QuickSight is now generally available, allowing business analysts and business users to create and consume insights quickly and simply using natural language.

Business users can swiftly extract important insights from data to guide decisions, while business analysts can get through their workload faster. This helps firms adopt a data-driven approach.

This reinvention of business intelligence means that users can quickly create a shareable document or presentation that explains data, extracts important insights and visualizations, and suggests the best course of action to grow their company. The enhanced data Q&A experience and simplified dashboard creation offer users greater work efficiency and better, more reliable, more secure data.

Customized Bedrock Model Import

Amazon Bedrock now allows users to import custom models, which speeds up the development of generative AI applications. With this new functionality, users can bring prior investments in model customization into Amazon Bedrock and use them in the same fully managed way as Bedrock’s existing models.

Users can import custom models for supported architectures such as Llama, Mistral, or Flan-T5 and access them on demand. As generative AI continues to evolve rapidly, these models have only grown in importance, with customers adapting them for specific use cases and business objectives. Bedrock’s import feature helps ensure better consistency for developers within these architectures while allowing greater access to customized models.
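As a rough illustration, importing a model amounts to pointing Bedrock at the model artifacts in Amazon S3 and starting an import job. The sketch below assumes the CreateModelImportJob API via boto3; the job name, model name, role ARN, and S3 URI are all placeholders.

```python
# Hedged sketch of starting a custom model import in Amazon Bedrock, assuming
# the CreateModelImportJob API. All names, the role ARN, and the S3 URI are
# placeholders for illustration.
import boto3

bedrock = boto3.client("bedrock")

response = bedrock.create_model_import_job(
    jobName="import-fine-tuned-llama",        # placeholder job name
    importedModelName="my-fine-tuned-llama",  # placeholder model name
    roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",
    modelDataSource={
        "s3DataSource": {
            # S3 prefix containing the model weights and config files
            "s3Uri": "s3://my-model-artifacts/fine-tuned-llama/"
        }
    },
)
print(response["jobArn"])

# Once the job finishes, the imported model can be invoked like other Bedrock
# models through the bedrock-runtime InvokeModel API, using the imported
# model's ARN as the model identifier.
```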

AWS Deadline Cloud

AWS also launched Deadline Cloud, a fully managed service that makes it easier for teams producing 2D and 3D visual assets for movies, TV series, ads, games, and industrial design to manage renders. 

Without requiring infrastructure management, Deadline Cloud makes it simple to quickly construct a cloud-based render farm that scales from zero to thousands of compute instances. 
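As a rough sketch of what that looks like in practice, the boto3 calls below create a farm and a queue, assuming the CreateFarm and CreateQueue operations in the Deadline Cloud API; the display names are placeholders, and fleet setup and job submission are omitted.

```python
# Hedged sketch of bootstrapping a Deadline Cloud render farm, assuming the
# CreateFarm and CreateQueue operations; display names are placeholders, and
# fleets, queue-fleet associations, and job submission are omitted.
import boto3

deadline = boto3.client("deadline")

# A farm is the top-level container for queues, fleets, and jobs.
farm = deadline.create_farm(displayName="studio-render-farm")

# A queue receives submitted render jobs and dispatches them to fleets.
queue = deadline.create_queue(
    farmId=farm["farmId"],
    displayName="feature-film-queue",
)

print(farm["farmId"], queue["queueId"])
```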

With a wide range of customization options and pre-built integrations for Autodesk Arnold, Autodesk Maya, Foundry Nuke, Luxion KeyShot, and SideFX Houdini, Deadline Cloud saves teams time when connecting their favorite applications and tailoring the render pipeline to their specifications. This increases users’ efficiency in creating renders while allowing them to meet demands for high-resolution content. Its pay-as-you-go model also enables better cost control and scalability so users can stay within budgetary constraints.

Learn how PointFive works with AWS to give you complete visibility into your cloud optimization opportunities.
