Author: alien


    DynamoDB – Monitoring



    Amazon offers CloudWatch for aggregating and analyzing DynamoDB performance through the CloudWatch console, the command line, or the CloudWatch API. You can also use it to set alarms that perform specified actions when certain events occur.

    Cloudwatch Console

    Utilize CloudWatch by logging in to the AWS Management Console and then opening the CloudWatch console.

    You can then perform the following steps −

    • Select Metrics from the navigation pane.

    • Under DynamoDB metrics within the CloudWatch Metrics by Category pane, choose Table Metrics.

    • Scroll through the upper pane to examine the entire list of table metrics. The Viewing list provides metric options.

    In the results interface, you can select or deselect each metric using the checkbox beside the resource name and metric. You can then view graphs for each selected item.

    API Integration

    You can also access CloudWatch programmatically with queries, and use metric values to trigger CloudWatch actions. Note that DynamoDB does not send metrics with a value of zero; it simply omits data points for time periods in which those metrics remain zero.

    The following are some of the most commonly used metrics −

    • ConditionalCheckFailedRequests − It tracks the number of failed conditional writes, such as conditional PutItem calls. A write whose condition evaluates to false increments this metric by one and returns an HTTP 400 error.

    • ConsumedReadCapacityUnits − It quantifies the capacity units used over a certain time period. You can use this to examine individual table and index consumption.

    • ConsumedWriteCapacityUnits − It quantifies the capacity units used over a certain time period. You can use this to examine individual table and index consumption.

    • ReadThrottleEvents − It quantifies requests exceeding provisioned capacity units in table/index reads. It increments on each throttle including batch operations with multiple throttles.

    • ReturnedBytes − It quantifies the bytes returned in retrieval operations within a certain time period.

    • ReturnedItemCount − It quantifies the items returned in Query and Scan operations over a certain time period. It addresses only items returned, not those evaluated, which are often very different figures.

    Note − Many more metrics exist, and most of them allow you to calculate averages, sums, maximums, minimums, and counts.



    DynamoDB – Data Backup



    Utilize Data Pipeline's import/export functionality to perform backups. How you execute a backup depends on whether you use the GUI console or use Data Pipeline directly (the API). Either create separate pipelines for each table when using the console, or import/export multiple tables in a single pipeline when using the direct option.

    Exporting and Importing Data

    You must create an Amazon S3 bucket prior to performing an export. You can export from one or more tables.

    Perform the following four-step process to execute an export −

    Step 1 − Log in to the AWS Management Console and open the Data Pipeline console.

    Step 2 − If you have no pipelines in the AWS region used, select Get started now. If you have one or more, select Create new pipeline.

    Step 3 − On the creation page, enter a name for your pipeline. Choose Build using a template for the Source parameter. Select Export DynamoDB table to S3 from the list. Enter the source table in the Source DynamoDB table name field.

    Enter the destination S3 bucket in the Output S3 Folder text box using the following format: s3://nameOfBucket/region/nameOfFolder. Enter an S3 destination for the log file in S3 location for logs text box.

    Step 4 − Select Activate after entering all settings.

    The pipeline may take several minutes to finish its creation process. Use the console to monitor its status. Confirm successful processing with the S3 console by viewing the exported file.

    Importing Data

    Successful imports can only happen if the following conditions are true: you created a destination table, the destination and source use identical names, and the destination and source use identical key schema.

    You can use a populated destination table, however, imports replace data items sharing a key with source items, and also add excess items to the table. The destination can also use a different region.

    Though you can export multiple sources, you can only import one per operation. You can perform an import by adhering to the following steps −

    Step 1 − Log in to the AWS Management Console, and then open the Data Pipeline console.

    Step 2 − If you are intending to execute a cross region import, then you should select the destination region.

    Step 3 − Select Create new pipeline.

    Step 4 − Enter the pipeline name in the Name field. Choose Build using a template for the Source parameter, and in the template list, select Import DynamoDB backup data from S3.

    Enter the location of the source file in the Input S3 Folder text box. Enter the destination table name in the Target DynamoDB table name field. Then enter the location for the log file in the S3 location for logs text box.

    Step 5 − Select Activate after entering all settings.

    The import starts immediately after the pipeline creation. It may take several minutes for the pipeline to complete the creation process.

    Errors

    When errors occur, the Data Pipeline console displays ERROR as the pipeline status. Clicking the pipeline with an error takes you to its detail page, which reveals every step of the process and the point at which the failure occurred. Log files within also provide some insight.

    You can review the common causes of the errors as follows −

    • The destination table for an import does not exist, or does not use identical key schema to the source.

    • The S3 bucket does not exist, or you do not have read/write permissions for it.

    • The pipeline timed out.

    • You do not have the necessary export/import permissions.

    • Your AWS account reached its resource limit.



    DynamoDB – Data Pipeline



    Data Pipeline allows for exporting and importing data to/from a table, file, or S3 bucket. This of course proves useful in backups, testing, and for similar needs or scenarios.

    In an export, you use the Data Pipeline console, which makes a new pipeline and launches an Amazon EMR (Elastic MapReduce) cluster to perform the export. EMR reads data from DynamoDB and writes it to the target. We discuss EMR in detail later in this tutorial.

    In an import operation, you use the Data Pipeline console, which makes a pipeline and launches EMR to perform the import. It reads data from the source and writes to the destination.

    Note − Export/import operations carry a cost given the services used, specifically, EMR and S3.

    Using Data Pipeline

    You must specify action and resource permissions when using Data Pipeline. You can utilize an IAM role or policy to define them. Users performing imports/exports should note that they require an active access key ID and secret key.

    IAM Roles for Data Pipeline

    You need two IAM roles to use Data Pipeline −

    • DataPipelineDefaultRole − This has all the actions you permit the pipeline to perform for you.

    • DataPipelineDefaultResourceRole − This has resources you permit the pipeline to provision for you.
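    As a rough illustration, the trust policy behind DataPipelineDefaultRole typically allows the Data Pipeline service itself to assume the role. The following is only a sketch; a console-generated role may differ in detail −

```json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "datapipeline.amazonaws.com" },
         "Action": "sts:AssumeRole"
      }
   ]
}
```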

    If you are new to Data Pipeline, you must create each role. Users who have used Data Pipeline before already possess these roles.

    Use the IAM console to create IAM roles for Data Pipeline, and perform the following four steps −

    Step 1 − Log in to the IAM console.

    Step 2 − Select Roles from the dashboard.

    Step 3 − Select Create New Role. Then enter DataPipelineDefaultRole in the Role Name field, and select Next Step. In the AWS Service Roles list in the Role Type panel, navigate to Data Pipeline, and choose Select. Select Create Role in the Review panel.

    Step 4 − Select Create New Role again, and repeat the previous step to create DataPipelineDefaultResourceRole.



    DynamoDB – Conditions



    In granting permissions, DynamoDB allows specifying conditions for them through a detailed IAM policy with condition keys. This supports settings like access to specific items and attributes.

    Note − DynamoDB does not support tags.

    Detailed Control

    Several conditions allow specificity down to items and attributes, like granting read-only access to specific items based on user account. Implement this level of control with conditioned IAM policies, which manage the security credentials. Then simply apply the policy to the desired users, groups, and roles. Web Identity Federation, a topic discussed later, also provides a way to control user access through Amazon, Facebook, and Google logins.

    The condition element of IAM policy implements access control. You simply add it to a policy. An example of its use consists of denying or permitting access to table items and attributes. The condition element can also employ condition keys to limit permissions.

    You can review the following two examples of the condition keys −

    • dynamodb:LeadingKeys − It prevents users from accessing items whose partition key value does not match their user ID.

    • dynamodb:Attributes − It prevents users from accessing or operating on attributes outside of those listed.
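    A minimal policy sketch combining both keys follows; the user ID variable, attribute names, and account ID are placeholders, not values from this tutorial −

```json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "LimitToOwnItemsAndListedAttributes",
         "Effect": "Allow",
         "Action": [ "dynamodb:GetItem", "dynamodb:Query" ],
         "Resource": "arn:aws:dynamodb:us-west-2:account-id:table/Tools",
         "Condition": {
            "ForAllValues:StringEquals": {
               "dynamodb:LeadingKeys": [ "${www.amazon.com:user_id}" ],
               "dynamodb:Attributes": [ "Name", "Price" ]
            }
         }
      }
   ]
}
```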

    On evaluation, IAM policies result in a true or false value. If any part evaluates to false, the whole policy evaluates to false, which results in denial of access. Be sure to specify all required information in condition keys to ensure users have appropriate access.

    Predefined Condition Keys

    AWS offers a collection of predefined condition keys, which apply to all services. They support a broad range of uses and fine detail in examining users and access.

    Note − There is case sensitivity in condition keys.

    You can review a selection of the following service-specific keys −

    • dynamodb:LeadingKeys − It represents a table's first key attribute, the partition key. Use the ForAllValues modifier in conditions.

    • dynamodb:Select − It represents a query/scan request Select parameter. It must be of the value ALL_ATTRIBUTES, ALL_PROJECTED_ATTRIBUTES, SPECIFIC_ATTRIBUTES, or COUNT.

    • dynamodb:Attributes − It represents an attribute name list within a request, or attributes returned from a request. Its values and their functions resemble API action parameters, e.g., BatchGetItem uses AttributesToGet.

    • dynamodb:ReturnValues − It represents a request's ReturnValues parameter, and can use these values: ALL_OLD, UPDATED_OLD, ALL_NEW, UPDATED_NEW, and NONE.

    • dynamodb:ReturnConsumedCapacity − It represents a request's ReturnConsumedCapacity parameter, and can use these values: TOTAL and NONE.



    DynamoDB – Permissions API



    DynamoDB API offers a large set of actions, which require permissions. In setting permissions, you must establish the actions permitted, resources permitted, and conditions of each.

    You can specify actions within the Action field of the policy. Specify resource values within the Resource field of the policy. But do ensure that you use the correct syntax containing the dynamodb: prefix with the API operation.

    For example − dynamodb:CreateTable

    You can also employ condition keys to filter permissions.

    Permissions and API Actions

    Take a good look at the API actions and associated permissions given in the following table −

    API Operation Necessary Permission
    BatchGetItem dynamodb:BatchGetItem
    BatchWriteItem dynamodb:BatchWriteItem
    CreateTable dynamodb:CreateTable
    DeleteItem dynamodb:DeleteItem
    DeleteTable dynamodb:DeleteTable
    DescribeLimits dynamodb:DescribeLimits
    DescribeReservedCapacity dynamodb:DescribeReservedCapacity
    DescribeReservedCapacityOfferings dynamodb:DescribeReservedCapacityOfferings
    DescribeStream dynamodb:DescribeStream
    DescribeTable dynamodb:DescribeTable
    GetItem dynamodb:GetItem
    GetRecords dynamodb:GetRecords
    GetShardIterator dynamodb:GetShardIterator
    ListStreams dynamodb:ListStreams
    ListTables dynamodb:ListTables
    PurchaseReservedCapacityOfferings dynamodb:PurchaseReservedCapacityOfferings
    PutItem dynamodb:PutItem
    Query dynamodb:Query
    Scan dynamodb:Scan
    UpdateItem dynamodb:UpdateItem
    UpdateTable dynamodb:UpdateTable

    Resources

    In the following table, you can review the resources associated with each permitted API action −

    API Operation Resource
    BatchGetItem arn:aws:dynamodb:region:account-id:table/table-name
    BatchWriteItem arn:aws:dynamodb:region:account-id:table/table-name
    CreateTable arn:aws:dynamodb:region:account-id:table/table-name
    DeleteItem arn:aws:dynamodb:region:account-id:table/table-name
    DeleteTable arn:aws:dynamodb:region:account-id:table/table-name
    DescribeLimits arn:aws:dynamodb:region:account-id:*
    DescribeReservedCapacity arn:aws:dynamodb:region:account-id:*
    DescribeReservedCapacityOfferings arn:aws:dynamodb:region:account-id:*
    DescribeStream arn:aws:dynamodb:region:account-id:table/table-name/stream/stream-label
    DescribeTable arn:aws:dynamodb:region:account-id:table/table-name
    GetItem arn:aws:dynamodb:region:account-id:table/table-name
    GetRecords arn:aws:dynamodb:region:account-id:table/table-name/stream/stream-label
    GetShardIterator arn:aws:dynamodb:region:account-id:table/table-name/stream/stream-label
    ListStreams arn:aws:dynamodb:region:account-id:table/table-name/stream/*
    ListTables *
    PurchaseReservedCapacityOfferings arn:aws:dynamodb:region:account-id:*
    PutItem arn:aws:dynamodb:region:account-id:table/table-name
    Query arn:aws:dynamodb:region:account-id:table/table-name or arn:aws:dynamodb:region:account-id:table/table-name/index/index-name
    Scan arn:aws:dynamodb:region:account-id:table/table-name or arn:aws:dynamodb:region:account-id:table/table-name/index/index-name

    UpdateItem arn:aws:dynamodb:region:account-id:table/table-name
    UpdateTable arn:aws:dynamodb:region:account-id:table/table-name


    DynamoDB – Access Control



    DynamoDB uses credentials you provide to authenticate requests. These credentials are required and must include permissions for AWS resource access. These permissions span virtually every aspect of DynamoDB down to the minor features of an operation or functionality.

    Types of Permissions

    In this section, we will discuss the various types of permissions and resource access in DynamoDB.

    Authenticating Users

    On signup, you provided a password and email, which serve as root credentials. DynamoDB associates this data with your AWS account, and uses it to give complete access to all resources.

    AWS recommends you use your root credentials only for the creation of an administration account. This allows you to create IAM accounts/users with fewer privileges. IAM users are other accounts spawned with the IAM service. Their access permissions/privileges include access to secure pages and certain custom permissions like table modification.

    The access keys provide another option for additional accounts and access. Use them to grant access, and also to avoid manual granting of access in certain situations. Federated users provide yet another option by allowing access through an identity provider.

    Administration

    AWS resources remain under ownership of an account. Permissions policies govern the permissions granted to spawn or access resources. Administrators associate permissions policies with IAM identities, meaning roles, groups, users, and services. They also attach permissions to resources.

    Permissions specify users, resources, and actions. Note administrators are merely accounts with administrator privileges.

    Operation and Resources

    Tables remain the main resources in DynamoDB. Subresources serve as additional resources, e.g., streams and indices. These resources use unique names, some of which are mentioned in the following table −

    Type ARN (Amazon Resource Name)
    Stream arn:aws:dynamodb:region:account-id:table/table-name/stream/stream-label
    Index arn:aws:dynamodb:region:account-id:table/table-name/index/index-name
    Table arn:aws:dynamodb:region:account-id:table/table-name

    Ownership

    A resource owner is defined as an AWS account which spawned the resource, or principal entity account responsible for request authentication in resource creation. Consider how this functions within the DynamoDB environment −

    • In using root credentials to create a table, your account remains resource owner.

    • In creating an IAM user and granting the user permission to create a table, your account remains the resource owner.

    • In creating an IAM user and granting the user, and anyone capable of assuming the role, permission to create a table, your account remains the resource owner.

    Manage Resource Access

    Management of access mainly requires attention to a permissions policy describing users and resource access. You associate policies with IAM identities or resources. However, DynamoDB only supports IAM/identity policies.

    Identity-based (IAM) policies allow you to grant privileges in the following ways −

    • Attach permissions to users or groups.
    • Attach permissions to roles for cross-account permissions.

    Other AWS services allow resource-based policies. These policies permit access to things like an S3 bucket.

    Policy Elements

    Policies define actions, effects, resources, and principals; and grant permission to perform these operations.

    Note − The API operations may require permissions for multiple actions.

    Take a closer look at the following policy elements −

    • Resource − An ARN identifies this.

    • Action − Keywords identify these resource operations, and whether to allow or deny.

    • Effect − It specifies the effect for a user request for an action, meaning allow or deny with denial as the default.

    • Principal − This identifies the user attached to the policy.

    Conditions

    In granting permissions, you can specify conditions for when policies become active such as on a particular date. Express conditions with condition keys, which include AWS systemwide keys and DynamoDB keys. These keys are discussed in detail later in the tutorial.

    Console Permissions

    A user requires certain basic permissions to use the console. They also require permissions for the console in other standard services −

    • CloudWatch
    • Data Pipeline
    • Identity and Access Management
    • Simple Notification Service (SNS)
    • Lambda

    If the IAM policy proves too limited, the user cannot use the console effectively. Also, you do not need to worry about user permissions for those only calling the CLI or API.

    Commonly Used IAM Policies

    AWS covers common operations in permissions with standalone IAM managed policies. They provide key permissions allowing you to avoid deep investigations into what you must grant.

    Some of them are as follows −

    • AmazonDynamoDBReadOnlyAccess − It gives read-only access via the console.

    • AmazonDynamoDBFullAccess − It gives full access via the console.

    • AmazonDynamoDBFullAccesswithDataPipeline − It gives full access via the console and permits export/import with Data Pipeline.

    You can, of course, also create custom policies.

    Granting Privileges: Using the Shell

    You can grant permissions with the JavaScript shell. The following program shows a typical permissions policy −

    {
       "Version": "2016-05-22",
       "Statement": [
          {
             "Sid": "DescribeQueryScanToolsTable",
             "Effect": "Deny",
    
             "Action": [
                "dynamodb:DescribeTable",
                "dynamodb:Query",
                "dynamodb:Scan"
             ],
             "Resource": "arn:aws:dynamodb:us-west-2:account-id:table/Tools"
          }
       ]
    }
    

    You can review the following three examples −

    Block the user from executing any table action.

    {
       "Version": "2016-05-23",
       "Statement": [
          {
             "Sid": "AllAPIActionsOnTools",
             "Effect": "Deny",
             "Action": "dynamodb:*",
             "Resource": "arn:aws:dynamodb:us-west-2:155556789012:table/Tools"
          }
       ]
    }
    

    Block access to a table and its indices.

    {
       "Version": "2016-05-23",
       "Statement": [
          {
             "Sid": "AccessAllIndexesOnTools",
             "Effect": "Deny",
             "Action": [
                "dynamodb:*"
             ],
             "Resource": [
                "arn:aws:dynamodb:us-west-2:155556789012:table/Tools",
                "arn:aws:dynamodb:us-west-2:155556789012:table/Tools/index/*"
             ]
          }
       ]
    }
    

    Block a user from making a reserved capacity offering purchase.

    {
       "Version": "2016-05-23",
       "Statement": [
          {
             "Sid": "BlockReservedCapacityPurchases",
             "Effect": "Deny",
             "Action": "dynamodb:PurchaseReservedCapacityOfferings",
             "Resource": "arn:aws:dynamodb:us-west-2:155556789012:*"
          }
       ]
    }
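    The same structure grants access when you switch the Effect to Allow. For instance, a sketch of a read-only policy for the same table might look as follows (the action list is an assumption about what "read-only" should cover) −

```json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "ReadOnlyOnTools",
         "Effect": "Allow",
         "Action": [
            "dynamodb:DescribeTable",
            "dynamodb:GetItem",
            "dynamodb:Query",
            "dynamodb:Scan"
         ],
         "Resource": "arn:aws:dynamodb:us-west-2:155556789012:table/Tools"
      }
   ]
}
```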
    

    Granting Privileges: Using the GUI Console

    You can also use the GUI console to create IAM policies. To begin with, choose Tables from the navigation pane. In the table list, choose the target table and follow these steps.

    Step 1 − Select the Access control tab.

    Step 2 − Select the identity provider, actions, and policy attributes. Select Create policy after entering all settings.

    Step 3 − Choose Attach policy instructions, and complete each required step to associate the policy with the appropriate IAM role.



    DynamoDB – Web Identity Federation



    Web Identity Federation allows you to simplify authentication and authorization for large user groups. You can skip the creation of individual accounts, and require users to login to an identity provider to get temporary credentials or tokens. It uses AWS Security Token Service (STS) to manage credentials. Applications use these tokens to interact with services.

    Web Identity Federation supports identity providers such as Amazon, Google, and Facebook.

    Function − In use, Web Identity Federation first calls an identity provider for user and app authentication, and the provider returns a token. This results in the app calling AWS STS and passing the token for input. STS authorizes the app and grants it temporary access credentials, which allow the app to use an IAM role and access resources based on policy.

    Implementing Web Identity Federation

    You must perform the following three steps prior to use −

    • Use a supported third party identity provider to register as a developer.

    • Register your application with the provider to obtain an app ID.

    • Create a single or multiple IAM roles, including policy attachment. You must use a role per provider per app.
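    As an illustration, the trust policy for such a role commonly names the provider as a federated principal and pins the app ID. The following is a sketch using Facebook; the app ID is a placeholder −

```json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Federated": "graph.facebook.com" },
         "Action": "sts:AssumeRoleWithWebIdentity",
         "Condition": {
            "StringEquals": { "graph.facebook.com:app_id": "your-app-id" }
         }
      }
   ]
}
```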

    Assume one of your IAM roles to use Web Identity Federation. Your app must then perform a three-step process −

    • Authentication
    • Credential acquisition
    • Resource Access

    In the first step, your app uses its own interface to call the provider and then manages the token process.

    Then step two manages tokens and requires your app to send an AssumeRoleWithWebIdentity request to AWS STS. The request holds the first token, the provider app ID, and the ARN of the IAM role. STS then provides credentials set to expire after a certain period.

    In the final step, your app receives a response from STS containing access information for DynamoDB resources. It consists of access credentials, expiration time, role, and role ID.



    DynamoDB – Aggregation



    DynamoDB does not provide aggregation functions. You must make creative use of queries, scans, indices, and assorted tools to perform these tasks. Bear in mind that the throughput expense of the queries/scans used in these operations can be heavy.

    You also have the option to use libraries and other tools for your preferred DynamoDB coding language. Ensure their compatibility with DynamoDB prior to use.

    Calculate Maximum or Minimum

    Utilize the ascending/descending storage order of results, the Limit parameter, and any parameters which set order to find the highest and lowest values.

    For example −

    Map<String, AttributeValue> eaval = new HashMap<>();
    eaval.put(":v1", new AttributeValue().withS("hashval"));
    DynamoDBQueryExpression<Table> queryExpression = new DynamoDBQueryExpression<Table>()
       .withIndexName("yourindexname")
       .withKeyConditionExpression("HK = :v1")
       .withExpressionAttributeValues(eaval)
       .withScanIndexForward(false);                //descending order
    
    queryExpression.setLimit(1);                    //only the first (highest) item
    QueryResultPage<Table> res =
       dynamoDBMapper.queryPage(Table.class, queryExpression);
    

    Calculate Count

    Use DescribeTable to get a count of the table items; however, note that it provides stale data. Alternatively, utilize the Java getScannedCount method on scan results.

    Utilize LastEvaluatedKey to page through all results when a scan spans multiple pages.

    For example −

    ScanRequest scanRequest = new ScanRequest().withTableName(yourtblName);
    ScanResult yourresult = client.scan(scanRequest);
    System.out.println("#items:" + yourresult.getScannedCount());
    

    Calculating Average and Sum

    Utilize indices and a query/scan to retrieve and filter values before processing. Then simply operate on those values through an object.
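    For instance, once a query/scan returns the relevant attribute values, a plain client-side pass computes the sum and average. The following sketch uses hard-coded values standing in for query results −

```java
import java.util.Arrays;
import java.util.List;

public class Aggregates {
   // Sum computed client-side over values already retrieved by a query or scan.
   public static double sum(List<Double> values) {
      double total = 0.0;
      for (double v : values) total += v;
      return total;
   }

   // Average of the retrieved values; zero for an empty result set.
   public static double average(List<Double> values) {
      return values.isEmpty() ? 0.0 : sum(values) / values.size();
   }

   public static void main(String[] args) {
      // Stand-ins for attribute values fetched from DynamoDB
      List<Double> prices = Arrays.asList(10.0, 20.0, 30.0);
      System.out.println("sum = " + sum(prices));       // sum = 60.0
      System.out.println("avg = " + average(prices));   // avg = 20.0
   }
}
```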



    DynamoDB – Local Secondary Indexes



    Some applications only perform queries with the primary key, but some situations benefit from an alternate sort key. Allow your application a choice by creating a single or multiple local secondary indexes.

    Complex data access requirements, such as combing through millions of items, make it necessary to perform more efficient queries/scans. Local secondary indices provide an alternate sort key for a partition key value. They also hold copies of all or some table attributes. They organize data by table partition key, but use a different sort key.

    Using a local secondary index removes the need for a whole table scan, and allows a simple and quick query using a sort key.

    All the local secondary indexes must satisfy certain conditions −

    • Identical partition key and source table partition key.
    • A sort key of only one scalar attribute.
    • Projection of the source table sort key acting as a non-key attribute.

    All the local secondary indexes automatically hold partition and sort keys from parent tables. In queries, this means efficient gathering of projected attributes, and also retrieval of attributes not projected.

    The storage limit for a local secondary index remains 10GB per partition key value, which includes all table items, and index items sharing a partition key value.

    Projecting an Attribute

    Some operations require excess reads/fetching due to complexity. These operations can consume substantial throughput. Projection allows you to avoid costly fetching and perform rich queries by isolating these attributes. Remember projections consist of attributes copied into a secondary index.

    When making a secondary index, you specify the attributes projected. Recall the three options provided by DynamoDB: KEYS_ONLY, INCLUDE, and ALL.

    When opting for certain attributes in projection, consider the associated cost tradeoffs −

    • If you project only a small set of necessary attributes, you dramatically reduce the storage costs.

    • If you project frequently accessed non-key attributes, you offset scan costs with storage costs.

    • If you project most or all non-key attributes, this maximizes flexibility and reduces throughput consumption (no table fetches); however, storage costs rise.

    • If you project KEYS_ONLY for frequent writes/updates and infrequent queries, it minimizes index size while keeping the index ready for queries.
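    In a CreateTable request, the projection choice appears in the local secondary index definition. For example, an INCLUDE projection might be specified as follows (the table and attribute names are illustrative only) −

```json
{
   "IndexName": "PriceIndex",
   "KeySchema": [
      { "AttributeName": "Make", "KeyType": "HASH" },
      { "AttributeName": "Price", "KeyType": "RANGE" }
   ],
   "Projection": {
      "ProjectionType": "INCLUDE",
      "NonKeyAttributes": [ "Model", "Year" ]
   }
}
```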

    Local Secondary Index Creation

    Use the LocalSecondaryIndex parameter of CreateTable to make a single or multiple local secondary indexes. You must specify one non-key attribute for the sort key. You create local secondary indexes at table creation, and delete them when you delete the table.

    Tables with a local secondary index must obey a limit of 10GB in size per partition key value, but can store any amount of items.

    Local Secondary Index Queries and Scans

    A query operation on a local secondary index returns all items with a matching partition key value; when multiple items in the index share sort key values, the matching items are not returned in any particular order. Queries on local secondary indexes use either eventual or strong consistency, with strongly consistent reads delivering the latest values.

    A scan operation returns all local secondary index data. Scans require you to provide a table and index name, and allow the use of a filter expression to discard data.

    Item Writing

    On creation of a local secondary index, you specify a sort key attribute and its data type. When you write an item, its type must match the data type of the key schema if the item defines an attribute of an index key.

    DynamoDB imposes no one-to-one relationship requirements on table items and local secondary index items. Tables with multiple local secondary indexes carry higher write costs than those with fewer.

    Throughput Considerations in Local Secondary Indexes

    Read capacity consumption of a query depends on the nature of data access. Queries use either eventual or strong consistency, with strongly consistent reads using one unit compared to half a unit in eventually consistent reads.

    Result limitations include a 1MB size maximum. Result sizes come from the sum of matching index item size rounded up to the nearest 4KB, and matching table item size also rounded up to the nearest 4KB.
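    The 4 KB rounding above can be sketched numerically. This hypothetical helper computes strongly and eventually consistent read units from a total result size in bytes; the arithmetic follows the rules just stated and is not AWS SDK code −

```java
public class ReadCost {
   // Strongly consistent reads: one unit per 4 KB, rounded up.
   public static long strongReadUnits(long totalBytes) {
      return (totalBytes + 4095) / 4096;
   }

   // Eventually consistent reads consume half as much, rounded up.
   public static long eventualReadUnits(long totalBytes) {
      return (strongReadUnits(totalBytes) + 1) / 2;
   }

   public static void main(String[] args) {
      // 5,000 bytes round up to two 4 KB blocks
      System.out.println(strongReadUnits(5000));    // 2
      System.out.println(eventualReadUnits(5000));  // 1
   }
}
```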

    The write capacity consumption remains within provisioned units. Calculate the total provisioned cost by finding the sum of consumed units in table writing and consumed units in updating indices.

    You can also consider the key factors influencing cost, some of which are −

    • When you write an item defining an indexed attribute or update an item to define an undefined indexed attribute, a single write operation occurs.

    • When a table update changes an indexed key attribute value, two writes occur: one to delete the old item from the index and one to add the updated item.

    • When a write causes the deletion of an indexed attribute, one write occurs to remove the old item projection.

    • When an item does not exist within the index prior to or after an update, no writes occur.

    Local Secondary Index Storage

    On a table item write, DynamoDB automatically copies the right attribute set to the required local secondary indexes. This charges your account. The space used results from the sum of table primary key byte size, index key attribute byte size, any present projected attribute byte size, and 100 bytes in overhead for each index item.

    Estimate storage requirements by estimating the average index item size and multiplying by the number of table items.
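    That estimate is plain arithmetic, which the following illustrative helper (not an AWS SDK call) makes explicit, assuming you already know the average sizes involved −

    ```java
    class LsiStorageEstimate {
       // Bytes for one index item: table primary key size + index key
       // attribute size + projected attribute size + 100 bytes of overhead.
       static long indexItemBytes(long primaryKeyBytes, long indexKeyBytes,
                                  long projectedBytes) {
          return primaryKeyBytes + indexKeyBytes + projectedBytes + 100;
       }

       // Rough total: average index item size multiplied by table item count.
       static long estimatedIndexBytes(long avgIndexItemBytes, long itemCount) {
          return avgIndexItemBytes * itemCount;
       }
    }
    ```

    For instance, a 30-byte primary key, 20-byte index key, and 150 bytes of projected attributes yield 300 bytes per index item, or roughly 300KB for a 1,000-item table.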

    Using Java to Work with Local Secondary Indexes

    Create a local secondary index by first creating a DynamoDB class instance. Then, create a CreateTableRequest class instance with necessary request information. Finally, use the createTable method.

    Example

    DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
       new ProfileCredentialsProvider()));
    String tableName = "Tools";
    CreateTableRequest createTableRequest = new
       CreateTableRequest().withTableName(tableName);
    
    //Provisioned Throughput
    createTableRequest.setProvisionedThroughput (
       new ProvisionedThroughput()
       .withReadCapacityUnits((long)5)
       .withWriteCapacityUnits((long)5));
    
    //Attributes
    ArrayList<AttributeDefinition> attributeDefinitions =
       new ArrayList<AttributeDefinition>();
       attributeDefinitions.add(new AttributeDefinition()
       .withAttributeName("Make")
       .withAttributeType("S"));
    
    attributeDefinitions.add(new AttributeDefinition()
       .withAttributeName("Model")
       .withAttributeType("S"));
    
    attributeDefinitions.add(new AttributeDefinition()
       .withAttributeName("Line")
       .withAttributeType("S"));
    
    createTableRequest.setAttributeDefinitions(attributeDefinitions);
    
    //Key Schema
    ArrayList<KeySchemaElement> tableKeySchema = new
       ArrayList<KeySchemaElement>();
    
    tableKeySchema.add(new KeySchemaElement()
       .withAttributeName("Make")
       .withKeyType(KeyType.HASH));                    //Partition key
    
    tableKeySchema.add(new KeySchemaElement()
       .withAttributeName("Model")
       .withKeyType(KeyType.RANGE));                   //Sort key
    
    createTableRequest.setKeySchema(tableKeySchema);
    ArrayList<KeySchemaElement> indexKeySchema = new
       ArrayList<KeySchemaElement>();
    
    indexKeySchema.add(new KeySchemaElement()
       .withAttributeName("Make")
       .withKeyType(KeyType.HASH));                   //Partition key
    
    indexKeySchema.add(new KeySchemaElement()
       .withAttributeName("Line")
       .withKeyType(KeyType.RANGE));                   //Sort key
    
    Projection projection = new Projection()
       .withProjectionType(ProjectionType.INCLUDE);
    
    ArrayList<String> nonKeyAttributes = new ArrayList<String>();
    nonKeyAttributes.add("Type");
    nonKeyAttributes.add("Year");
    projection.setNonKeyAttributes(nonKeyAttributes);
    
    LocalSecondaryIndex localSecondaryIndex = new LocalSecondaryIndex()
       .withIndexName("ModelIndex")
       .withKeySchema(indexKeySchema)
       .withProjection(projection);
    
    ArrayList<LocalSecondaryIndex> localSecondaryIndexes = new
       ArrayList<LocalSecondaryIndex>();
    
    localSecondaryIndexes.add(localSecondaryIndex);
    createTableRequest.setLocalSecondaryIndexes(localSecondaryIndexes);
    Table table = dynamoDB.createTable(createTableRequest);
    System.out.println(table.getDescription());
    

    Retrieve information about a local secondary index with the describe method. Simply create a DynamoDB class instance, get a Table class instance for the target table, and call its describe method.

    Example

    DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
       new ProfileCredentialsProvider()));
    
    String tableName = "Tools";
    Table table = dynamoDB.getTable(tableName);
    TableDescription tableDescription = table.describe();
    
    List<LocalSecondaryIndexDescription> localSecondaryIndexes =
       tableDescription.getLocalSecondaryIndexes();
    
    Iterator<LocalSecondaryIndexDescription> lsiIter =
       localSecondaryIndexes.iterator();
    
    while (lsiIter.hasNext()) {
       LocalSecondaryIndexDescription lsiDescription = lsiIter.next();
       System.out.println("Index info " + lsiDescription.getIndexName() + ":");
       Iterator<KeySchemaElement> kseIter = lsiDescription.getKeySchema().iterator();
    
       while (kseIter.hasNext()) {
          KeySchemaElement kse = kseIter.next();
          System.out.printf("\t%s: %s\n", kse.getAttributeName(), kse.getKeyType());
       }
    
       Projection projection = lsiDescription.getProjection();
       System.out.println("\tProjection type: " + projection.getProjectionType());
    
       if (projection.getProjectionType().toString().equals("INCLUDE")) {
          System.out.println("\t\tNon-key projected attributes: " +
             projection.getNonKeyAttributes());
       }
    }
    

    Perform a query by using the same steps as a table query. Merely create a DynamoDB class instance, a Table class instance, an Index class instance, a query object, and utilize the query method.

    Example

    DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
       new ProfileCredentialsProvider()));
    
    String tableName = "Tools";
    Table table = dynamoDB.getTable(tableName);
    Index index = table.getIndex("ModelIndex");
    QuerySpec spec = new QuerySpec()
       .withKeyConditionExpression("Make = :v_make and Line = :v_line")
       .withValueMap(new ValueMap()
       .withString(":v_make", "Depault")
       .withString(":v_line", "SuperSawz"));
    
    ItemCollection<QueryOutcome> items = index.query(spec);
    Iterator<Item> itemsIter = items.iterator();
    
    while (itemsIter.hasNext()) {
       Item item = itemsIter.next();
       System.out.println(item.toJSONPretty());
    }
    

    You can also review the following example.

    Note − The following example may assume a previously created data source. Before attempting to execute, acquire supporting libraries and create necessary data sources (tables with required characteristics, or other referenced sources).

    The following example also uses Eclipse IDE, an AWS credentials file, and the AWS Toolkit within an Eclipse AWS Java Project.

    Example

    import java.util.ArrayList;
    import java.util.Iterator;
    
    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
    
    import com.amazonaws.services.dynamodbv2.document.DynamoDB;
    import com.amazonaws.services.dynamodbv2.document.Index;
    import com.amazonaws.services.dynamodbv2.document.Item;
    import com.amazonaws.services.dynamodbv2.document.ItemCollection;
    import com.amazonaws.services.dynamodbv2.document.PutItemOutcome;
    import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
    import com.amazonaws.services.dynamodbv2.document.Table;
    import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
    import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
    
    import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
    import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
    import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
    import com.amazonaws.services.dynamodbv2.model.KeyType;
    import com.amazonaws.services.dynamodbv2.model.LocalSecondaryIndex;
    import com.amazonaws.services.dynamodbv2.model.Projection;
    import com.amazonaws.services.dynamodbv2.model.ProjectionType;
    import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
    import com.amazonaws.services.dynamodbv2.model.ReturnConsumedCapacity;
    import com.amazonaws.services.dynamodbv2.model.Select;
    
    public class LocalSecondaryIndexSample {
       static DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
          new ProfileCredentialsProvider()));
       public static String tableName = "ProductOrders";
    
       public static void main(String[] args) throws Exception {
          createTable();
          query(null);
          query("IsOpenIndex");
          query("OrderDateIndex");
       }
       public static void createTable() {
          CreateTableRequest createTableRequest = new CreateTableRequest()
             .withTableName(tableName)
             .withProvisionedThroughput(new ProvisionedThroughput()
             .withReadCapacityUnits((long) 1)
             .withWriteCapacityUnits((long) 1));
    
          // Table partition and sort keys attributes
          ArrayList<AttributeDefinition> attributeDefinitions = new
             ArrayList<AttributeDefinition>();
    
          attributeDefinitions.add(new AttributeDefinition()
             .withAttributeName("CustomerID")
             .withAttributeType("S"));
    
          attributeDefinitions.add(new AttributeDefinition()
             .withAttributeName("OrderID")
             .withAttributeType("N"));
    
          // Index primary key attributes
          attributeDefinitions.add(new AttributeDefinition()
             .withAttributeName("OrderDate")
             .withAttributeType("N"));
    
          attributeDefinitions.add(new AttributeDefinition()
             .withAttributeName("OpenStatus")
             .withAttributeType("N"));
          createTableRequest.setAttributeDefinitions(attributeDefinitions);
    
          // Table key schema
          ArrayList<KeySchemaElement> tableKeySchema = new
             ArrayList<KeySchemaElement>();
          tableKeySchema.add(new KeySchemaElement()
             .withAttributeName("CustomerID")
             .withKeyType(KeyType.HASH));                    //Partition key
    
          tableKeySchema.add(new KeySchemaElement()
             .withAttributeName("OrderID")
             .withKeyType(KeyType.RANGE));                   //Sort key
    
          createTableRequest.setKeySchema(tableKeySchema);
          ArrayList<LocalSecondaryIndex> localSecondaryIndexes = new
             ArrayList<LocalSecondaryIndex>();
    
          // OrderDateIndex
          LocalSecondaryIndex orderDateIndex = new LocalSecondaryIndex()
             .withIndexName("OrderDateIndex");
    
          // OrderDateIndex key schema
          ArrayList<KeySchemaElement> indexKeySchema = new
             ArrayList<KeySchemaElement>();
          indexKeySchema.add(new KeySchemaElement()
             .withAttributeName("CustomerID")
             .withKeyType(KeyType.HASH));                   //Partition key
    
          indexKeySchema.add(new KeySchemaElement()
             .withAttributeName("OrderDate")
             .withKeyType(KeyType.RANGE));                   //Sort key
          orderDateIndex.setKeySchema(indexKeySchema);
    
          // OrderDateIndex projection w/attributes list
          Projection projection = new Projection()
             .withProjectionType(ProjectionType.INCLUDE);
    
          ArrayList<String> nonKeyAttributes = new ArrayList<String>();
          nonKeyAttributes.add("ProdCat");
          nonKeyAttributes.add("ProdNomenclature");
          projection.setNonKeyAttributes(nonKeyAttributes);
          orderDateIndex.setProjection(projection);
          localSecondaryIndexes.add(orderDateIndex);
    
          // IsOpenIndex
          LocalSecondaryIndex isOpenIndex = new LocalSecondaryIndex()
             .withIndexName("IsOpenIndex");
    
          // IsOpenIndex key schema
          indexKeySchema = new ArrayList<KeySchemaElement>();
          indexKeySchema.add(new KeySchemaElement()
             .withAttributeName("CustomerID")
             .withKeyType(KeyType.HASH));                   //Partition key
    
          indexKeySchema.add(new KeySchemaElement()
             .withAttributeName("OpenStatus")
             .withKeyType(KeyType.RANGE));                   //Sort key
    
          // IsOpenIndex projection
          projection = new Projection().withProjectionType(ProjectionType.ALL);
          isOpenIndex.setKeySchema(indexKeySchema);
          isOpenIndex.setProjection(projection);
          localSecondaryIndexes.add(isOpenIndex);
    
          // Put definitions in CreateTable request
          createTableRequest.setLocalSecondaryIndexes(localSecondaryIndexes);
          System.out.println("Spawning table " + tableName + "...");
          System.out.println(dynamoDB.createTable(createTableRequest));
    
          // Pause for ACTIVE status
          System.out.println("Waiting for ACTIVE table:" + tableName);
          try {
             Table table = dynamoDB.getTable(tableName);
             table.waitForActive();
          } catch (InterruptedException e) {
             e.printStackTrace();
          }
       }
       public static void query(String indexName) {
          Table table = dynamoDB.getTable(tableName);
          System.out.println("\n*************************************************\n");
          System.out.println("Executing query on " + tableName);
          QuerySpec querySpec = new QuerySpec()
             .withConsistentRead(true)
             .withScanIndexForward(true)
             .withReturnConsumedCapacity(ReturnConsumedCapacity.TOTAL);
    
          if ("IsOpenIndex".equals(indexName)) {
             System.out.println("\nEmploying index: '" + indexName
                + "' open orders for this customer.");

             System.out.println(
                "Returns only user-specified attribute list\n");
             Index index = table.getIndex(indexName);
    
             querySpec.withKeyConditionExpression(
                "CustomerID = :v_custmid and OpenStatus = :v_openstat")
                .withValueMap(new ValueMap()
                .withString(":v_custmid", "jane@example.com")
                .withNumber(":v_openstat", 1));
    
             querySpec.withProjectionExpression(
                "OrderDate, ProdCat, ProdNomenclature, OrderStatus");
             ItemCollection<QueryOutcome> items = index.query(querySpec);
             Iterator<Item> iterator = items.iterator();
             System.out.println("Printing query results...");
    
             while (iterator.hasNext()) {
                System.out.println(iterator.next().toJSONPretty());
             }
          } else if ("OrderDateIndex".equals(indexName)) {
             System.out.println("\nUsing index: '" + indexName
                + "': this customer's orders placed after 05/22/2016.");
             System.out.println("Projected attributes are returned\n");
             Index index = table.getIndex(indexName);
    
             querySpec.withKeyConditionExpression(
                "CustomerID = :v_custmid and OrderDate >= :v_ordrdate")
                .withValueMap(new ValueMap()
                .withString(":v_custmid", "jane@example.com")
                .withNumber(":v_ordrdate", 20160522));
    
             querySpec.withSelect(Select.ALL_PROJECTED_ATTRIBUTES);
             ItemCollection<QueryOutcome> items = index.query(querySpec);
             Iterator<Item> iterator = items.iterator();
             System.out.println("Printing query results...");
    
             while (iterator.hasNext()) {
                System.out.println(iterator.next().toJSONPretty());
             }
          } else {
             System.out.println("\nNo index: All Jane's orders by OrderID:\n");
             querySpec.withKeyConditionExpression("CustomerID = :v_custmid")
                .withValueMap(new ValueMap()
                .withString(":v_custmid", "jane@example.com"));
    
             ItemCollection<QueryOutcome> items = table.query(querySpec);
             Iterator<Item> iterator = items.iterator();
             System.out.println("Printing query results...");
    
             while (iterator.hasNext()) {
                System.out.println(iterator.next().toJSONPretty());
             }
          }
       }
    }
    



    DynamoDB – Global Secondary Indexes



    Applications requiring various query types with different attributes can use one or more global secondary indexes to perform these detailed queries.

    For example − Consider a system tracking users, their login status, and their time logged in. As such a table grows, queries on its data slow down.

    Global secondary indexes accelerate queries by organizing a selection of attributes from a table. Each index has its own primary key for organizing data, and it requires neither the table's key attributes nor a key schema identical to the table's.

    All the global secondary indexes must include a partition key, with the option of a sort key. The index key schema can differ from the table, and index key attributes can use any top-level string, number, or binary table attributes.

    A projection can include other table attributes; however, queries do not retrieve data from the parent table.

    Attribute Projections

    Projections consist of an attribute set copied from table to secondary index. A Projection always occurs with the table partition key and sort key. In queries, projections allow DynamoDB access to any attribute of the projection; they essentially exist as their own table.

    In a secondary index creation, you must specify attributes for projection. DynamoDB offers three ways to perform this task −

    • KEYS_ONLY − All index items consist of table partition and sort key values, and index key values. This creates the smallest index.

    • INCLUDE − It includes KEYS_ONLY attributes and specified non-key attributes.

    • ALL − It includes all source table attributes, creating the largest possible index.

    Note the tradeoffs in projecting attributes into a global secondary index, which relate to throughput and storage cost.

    Consider the following points −

    • If you only need access to a few attributes, with low latency, project only those you need. This reduces storage and write costs.

    • If an application frequently accesses certain non-key attributes, project them because the storage costs pale in comparison to scan consumption.

    • You can project large sets of attributes frequently accessed, however, this carries a high storage cost.

    • Use KEYS_ONLY for infrequent table queries and frequent writes/updates. This controls size, but still offers good performance on queries.

    Global Secondary Index Queries and Scans

    You can utilize queries for accessing a single or multiple items in an index. You must specify index and table name, desired attributes, and conditions; with the option to return results in ascending or descending order.

    You can also utilize scans to get all index data. It requires table and index name. You utilize a filter expression to retrieve specific data.

    Table and Index Data Synchronization

    DynamoDB automatically performs synchronization on indexes with their parent table. Each modifying operation on items causes asynchronous updates, however, applications do not write to indexes directly.

    You need to understand the impact of DynamoDB maintenance on indices. On creation of an index, you specify key attributes and data types, which means on a write, those data types must match key schema data types.

    On item creation or deletion, indexes update in an eventually consistent manner, however, updates to data propagate in a fraction of a second (unless system failure of some type occurs). You must account for this delay in applications.

    Throughput Considerations in Global Secondary Indexes

    Multiple global secondary indexes impact throughput. Index creation requires capacity unit specifications, which exist separately from the table, resulting in operations consuming index capacity units rather than table units.

    This can result in throttling if a query or write exceeds provisioned throughput. View throughput settings by using DescribeTable.

    Read Capacity

    Global secondary indexes deliver eventual consistency. In queries, DynamoDB performs provisioning calculations identical to those used for tables, with the lone difference of using index entry size rather than item size. The 1MB limit on query returns still applies, and includes attribute name sizes and values across every returned item.

    Write Capacity

    When write operations occur, the affected index consumes write units. Write throughput costs are the sum of write capacity units consumed in table writes and units consumed in index updates. A successful write operation requires sufficient capacity, or it results in throttling.

    Write costs also remain dependent on certain factors, some of which are as follows −

    • New items defining indexed attributes or item updates defining undefined indexed attributes use a single write operation to add the item to the index.

    • Updates changing indexed key attribute value use two writes to delete an item and write a new one.

    • A table write triggering deletion of an indexed attribute uses a single write to erase the old item projection in the index.

    • Items absent in the index prior to and after an update operation use no writes.

    • Updates changing only projected attribute value in the index key schema, and not indexed key attribute value, use one write to update values of projected attributes into the index.

    All these factors assume an item size of less than or equal to 1KB.
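    The total write cost described above is simply additive across the table and its indexes. A tiny illustrative helper (not an AWS API) makes this explicit −

    ```java
    class GsiWriteCost {
       // Total write cost of one operation: units consumed by the table
       // write plus units consumed updating each affected global secondary
       // index. Assumes items of 1KB or less, as stated above.
       static long totalWriteUnits(long tableUnits, long[] indexUnits) {
          long total = tableUnits;
          for (long u : indexUnits) {
             total += u;
          }
          return total;
       }
    }
    ```

    For example, a 1-unit table write that triggers a 1-unit update on one index and a 2-unit delete-and-add on another consumes 4 write units in total.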

    Global Secondary Index Storage

    On an item write, DynamoDB automatically copies the right set of attributes to any indices where the attributes must exist. This impacts your account by charging it for table item storage and attribute storage. The space used results from the sum of these quantities −

    • Byte size of table primary key
    • Byte size of index key attribute
    • Byte size of projected attributes
    • 100 byte-overhead per index item

    You can estimate storage needs through estimating average item size and multiplying by the quantity of the table items with the global secondary index key attributes.

    DynamoDB does not write index data for a table item that leaves the index partition or sort key attribute undefined.

    Global Secondary Index CRUD

    Create a table with global secondary indexes by using the CreateTable operation paired with the GlobalSecondaryIndexes parameter. You must specify an attribute to serve as the index partition key, and you can optionally use another as the index sort key. All index key attributes must be string, number, or binary scalars. You must also provide throughput settings, consisting of ReadCapacityUnits and WriteCapacityUnits.

    Use UpdateTable to add global secondary indexes to existing tables using the GlobalSecondaryIndexes parameter once again.

    In this operation, you must provide the following inputs −

    • Index name
    • Key schema
    • Projected attributes
    • Throughput settings

    Adding a global secondary index to a large table may take substantial time due to item volume, projected attribute volume, write capacity, and write activity. Use CloudWatch metrics to monitor the process.
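    A hedged sketch of this operation follows; the table name, attribute, and index name are illustrative, but the request shape uses the SDK's UpdateTableRequest with a CreateGlobalSecondaryIndexAction −

    ```java
    // Sketch: add a GSI to an existing table with UpdateTable. The new key
    // attribute must appear in AttributeDefinitions; "Humidity" and
    // "HumidityIndex" are hypothetical names.
    AmazonDynamoDBClient client = new AmazonDynamoDBClient(
       new ProfileCredentialsProvider());

    UpdateTableRequest request = new UpdateTableRequest()
       .withTableName("ClimateInfo")
       .withAttributeDefinitions(new AttributeDefinition()
          .withAttributeName("Humidity")
          .withAttributeType("N"))
       .withGlobalSecondaryIndexUpdates(new GlobalSecondaryIndexUpdate()
          .withCreate(new CreateGlobalSecondaryIndexAction()
             .withIndexName("HumidityIndex")
             .withKeySchema(new KeySchemaElement()
                .withAttributeName("Humidity")
                .withKeyType(KeyType.HASH))
             .withProjection(new Projection()
                .withProjectionType(ProjectionType.KEYS_ONLY))
             .withProvisionedThroughput(new ProvisionedThroughput()
                .withReadCapacityUnits(1L)
                .withWriteCapacityUnits(1L))));

    client.updateTable(request);
    ```

    The new index starts in CREATING status while DynamoDB backfills it.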

    Use DescribeTable to fetch status information for a global secondary index. It returns one of four IndexStatus for GlobalSecondaryIndexes −

    • CREATING − It indicates the build stage of the index, and its unavailability.

    • ACTIVE − It indicates the readiness of the index for use.

    • UPDATING − It indicates the update status of throughput settings.

    • DELETING − It indicates the delete status of the index, and its permanent unavailability for use.

    Update global secondary index provisioned throughput settings during the loading/backfilling stage (DynamoDB writing attributes to an index and tracking added/deleted/updated items). Use UpdateTable to perform this operation.

    You should remember that you cannot add/delete other indices during the backfilling stage.

    Use UpdateTable to delete global secondary indexes. It permits deletion of only one index per operation, however, you can run multiple operations concurrently, up to five. The deletion process does not affect the read/write activities of the parent table, but you cannot add/delete other indices until the operation completes.
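    Deletion uses the same UpdateTable operation with a DeleteGlobalSecondaryIndexAction; in this sketch the table and index names are illustrative −

    ```java
    // Sketch: delete a GSI named "HumidityIndex" via UpdateTable. Only the
    // index name is required; the index enters DELETING status.
    AmazonDynamoDBClient client = new AmazonDynamoDBClient(
       new ProfileCredentialsProvider());

    client.updateTable(new UpdateTableRequest()
       .withTableName("ClimateInfo")
       .withGlobalSecondaryIndexUpdates(new GlobalSecondaryIndexUpdate()
          .withDelete(new DeleteGlobalSecondaryIndexAction()
             .withIndexName("HumidityIndex"))));
    ```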

    Using Java to Work with Global Secondary Indexes

    Create a table with an index through CreateTable. Simply create a DynamoDB class instance, a CreateTableRequest class instance for request information, and pass the request object to the CreateTable method.

    The following program is a short example −

    DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient (
       new ProfileCredentialsProvider()));
    
    // Attributes
    ArrayList<AttributeDefinition> attributeDefinitions = new
       ArrayList<AttributeDefinition>();
    attributeDefinitions.add(new AttributeDefinition()
       .withAttributeName("City")
       .withAttributeType("S"));
    
    attributeDefinitions.add(new AttributeDefinition()
       .withAttributeName("Date")
       .withAttributeType("S"));
    
    attributeDefinitions.add(new AttributeDefinition()
       .withAttributeName("Wind")
       .withAttributeType("N"));
    
    // Key schema of the table
    ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
    tableKeySchema.add(new KeySchemaElement()
       .withAttributeName("City")
       .withKeyType(KeyType.HASH));              //Partition key
    
    tableKeySchema.add(new KeySchemaElement()
       .withAttributeName("Date")
       .withKeyType(KeyType.RANGE));             //Sort key
    
    // Wind index
    GlobalSecondaryIndex windIndex = new GlobalSecondaryIndex()
       .withIndexName("WindIndex")
       .withProvisionedThroughput(new ProvisionedThroughput()
       .withReadCapacityUnits((long) 10)
       .withWriteCapacityUnits((long) 1))
       .withProjection(new Projection().withProjectionType(ProjectionType.ALL));
    
    ArrayList<KeySchemaElement> indexKeySchema = new ArrayList<KeySchemaElement>();
    indexKeySchema.add(new KeySchemaElement()
       .withAttributeName("Date")
       .withKeyType(KeyType.HASH));              //Partition key
    
    indexKeySchema.add(new KeySchemaElement()
       .withAttributeName("Wind")
       .withKeyType(KeyType.RANGE));             //Sort key
    
    windIndex.setKeySchema(indexKeySchema);
    CreateTableRequest createTableRequest = new CreateTableRequest()
       .withTableName("ClimateInfo")
       .withProvisionedThroughput(new ProvisionedThroughput()
       .withReadCapacityUnits((long) 5)
       .withWriteCapacityUnits((long) 1))
       .withAttributeDefinitions(attributeDefinitions)
       .withKeySchema(tableKeySchema)
       .withGlobalSecondaryIndexes(windIndex);
    Table table = dynamoDB.createTable(createTableRequest);
    System.out.println(table.getDescription());
    

    Retrieve the index information with DescribeTable. First, create a DynamoDB class instance. Then create a Table class instance to target an index. Finally, pass the table to the describe method.

    Here is a short example −

    DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient (
       new ProfileCredentialsProvider()));
    
    Table table = dynamoDB.getTable("ClimateInfo");
    TableDescription tableDesc = table.describe();
    Iterator<GlobalSecondaryIndexDescription> gsiIter =
       tableDesc.getGlobalSecondaryIndexes().iterator();
    
    while (gsiIter.hasNext()) {
       GlobalSecondaryIndexDescription gsiDesc = gsiIter.next();
       System.out.println("Index data " + gsiDesc.getIndexName() + ":");
       Iterator<KeySchemaElement> kseIter = gsiDesc.getKeySchema().iterator();
    
       while (kseIter.hasNext()) {
          KeySchemaElement kse = kseIter.next();
          System.out.printf("\t%s: %s\n", kse.getAttributeName(), kse.getKeyType());
       }
       Projection projection = gsiDesc.getProjection();
       System.out.println("\tProjection type: " + projection.getProjectionType());
    
       if (projection.getProjectionType().toString().equals("INCLUDE")) {
          System.out.println("\t\tNon-key projected attributes: "
             + projection.getNonKeyAttributes());
       }
    }
    

    Use Query to perform an index query as with a table query. Simply create a DynamoDB class instance, a Table class instance for the parent table, an Index class instance for the specific index, and pass the query object to the index's query method.

    Take a look at the following code to understand better −

    DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient (
       new ProfileCredentialsProvider()));
    
    Table table = dynamoDB.getTable("ClimateInfo");
    Index index = table.getIndex("WindIndex");
    QuerySpec spec = new QuerySpec()
       .withKeyConditionExpression("#d = :v_date and Wind = :v_wind")
       .withNameMap(new NameMap()
       .with("#d", "Date"))
       .withValueMap(new ValueMap()
       .withString(":v_date","2016-05-15")
       .withNumber(":v_wind",0));
    
    ItemCollection<QueryOutcome> items = index.query(spec);
    Iterator<Item> iter = items.iterator();
    
    while (iter.hasNext()) {
       System.out.println(iter.next().toJSONPretty());
    }
    

    The following program is a bigger example for better understanding −

    Note − The following program may assume a previously created data source. Before attempting to execute, acquire supporting libraries and create necessary data sources (tables with required characteristics, or other referenced sources).

    This example also uses Eclipse IDE, an AWS credentials file, and the AWS Toolkit within an Eclipse AWS Java Project.

    import java.util.ArrayList;
    import java.util.Iterator;
    
    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
    import com.amazonaws.services.dynamodbv2.document.DynamoDB;
    import com.amazonaws.services.dynamodbv2.document.Index;
    import com.amazonaws.services.dynamodbv2.document.Item;
    import com.amazonaws.services.dynamodbv2.document.ItemCollection;
    import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
    import com.amazonaws.services.dynamodbv2.document.Table;
    import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
    import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
    
    import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
    import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
    import com.amazonaws.services.dynamodbv2.model.GlobalSecondaryIndex;
    import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
    import com.amazonaws.services.dynamodbv2.model.KeyType;
    import com.amazonaws.services.dynamodbv2.model.Projection;
    import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
    
    public class GlobalSecondaryIndexSample {
       static DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient (
          new ProfileCredentialsProvider()));
       public static String tableName = "Bugs";
       public static void main(String[] args) throws Exception {
          createTable();
          queryIndex("CreationDateIndex");
          queryIndex("NameIndex");
          queryIndex("DueDateIndex");
       }
       public static void createTable() {
          // Attributes
          ArrayList<AttributeDefinition> attributeDefinitions = new
             ArrayList<AttributeDefinition>();
          attributeDefinitions.add(new AttributeDefinition()
             .withAttributeName("BugID")
             .withAttributeType("S"));
    
          attributeDefinitions.add(new AttributeDefinition()
             .withAttributeName("Name")
             .withAttributeType("S"));
    
          attributeDefinitions.add(new AttributeDefinition()
             .withAttributeName("CreationDate")
             .withAttributeType("S"));
    
          attributeDefinitions.add(new AttributeDefinition()
             .withAttributeName("DueDate")
             .withAttributeType("S"));
    
          // Table Key schema
          ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
          tableKeySchema.add (new KeySchemaElement()
             .withAttributeName("BugID")
             .withKeyType(KeyType.HASH));              //Partition key
    
          tableKeySchema.add (new KeySchemaElement()
             .withAttributeName("Name")
             .withKeyType(KeyType.RANGE));             //Sort key
    
      // Indexes' initial provisioned throughput
          ProvisionedThroughput ptIndex = new ProvisionedThroughput()
             .withReadCapacityUnits(1L)
             .withWriteCapacityUnits(1L);
    
          // CreationDateIndex
          GlobalSecondaryIndex creationDateIndex = new GlobalSecondaryIndex()
             .withIndexName("CreationDateIndex")
             .withProvisionedThroughput(ptIndex)
             .withKeySchema(new KeySchemaElement()
             .withAttributeName("CreationDate")
             .withKeyType(KeyType.HASH),               //Partition key
             new KeySchemaElement()
             .withAttributeName("BugID")
             .withKeyType(KeyType.RANGE))              //Sort key
             .withProjection(new Projection()
             .withProjectionType("INCLUDE")
             .withNonKeyAttributes("Description", "Status"));
    
          // NameIndex
          GlobalSecondaryIndex nameIndex = new GlobalSecondaryIndex()
             .withIndexName("NameIndex")
             .withProvisionedThroughput(ptIndex)
             .withKeySchema(new KeySchemaElement()
             .withAttributeName("Name")
             .withKeyType(KeyType.HASH),                  //Partition key
             new KeySchemaElement()
             .withAttributeName("BugID")
             .withKeyType(KeyType.RANGE))                 //Sort key
             .withProjection(new Projection()
             .withProjectionType("KEYS_ONLY"));
    
          // DueDateIndex
          GlobalSecondaryIndex dueDateIndex = new GlobalSecondaryIndex()
             .withIndexName("DueDateIndex")
             .withProvisionedThroughput(ptIndex)
             .withKeySchema(new KeySchemaElement()
             .withAttributeName("DueDate")
             .withKeyType(KeyType.HASH))               //Partition key
             .withProjection(new Projection()
             .withProjectionType("ALL"));
    
          CreateTableRequest createTableRequest = new CreateTableRequest()
             .withTableName(tableName)
             .withProvisionedThroughput( new ProvisionedThroughput()
             .withReadCapacityUnits( (long) 1)
             .withWriteCapacityUnits( (long) 1))
             .withAttributeDefinitions(attributeDefinitions)
             .withKeySchema(tableKeySchema)
             .withGlobalSecondaryIndexes(creationDateIndex, nameIndex, dueDateIndex);
         System.out.println("Creating " + tableName + "...");
         dynamoDB.createTable(createTableRequest);
    
          // Pause for active table state
          System.out.println("Waiting for ACTIVE state of " + tableName);
          try {
             Table table = dynamoDB.getTable(tableName);
             table.waitForActive();
          } catch (InterruptedException e) {
             e.printStackTrace();
          }
       }
       public static void queryIndex(String indexName) {
          Table table = dynamoDB.getTable(tableName);
      System.out.println("\n*****************************************************\n");
          System.out.print("Querying index " + indexName + "...");
          Index index = table.getIndex(indexName);
          ItemCollection<QueryOutcome> items = null;
          QuerySpec querySpec = new QuerySpec();
    
      if (indexName.equals("CreationDateIndex")) {
         System.out.println("Issues filed on 2016-05-22");
         querySpec.withKeyConditionExpression(
            "CreationDate = :v_date and begins_with(BugID, :v_bug)")
            .withValueMap(new ValueMap()
            .withString(":v_date","2016-05-22")
            .withString(":v_bug","A-"));
         items = index.query(querySpec);
      } else if (indexName.equals("NameIndex")) {
         System.out.println("Compile error");
         querySpec.withKeyConditionExpression(
            "Name = :v_name and begins_with(BugID, :v_bug)")
            .withValueMap(new ValueMap()
            .withString(":v_name","Compile error")
            .withString(":v_bug","A-"));
         items = index.query(querySpec);
      } else if (indexName.equals("DueDateIndex")) {
             System.out.println("Items due on 2016-10-15");
             querySpec.withKeyConditionExpression("DueDate = :v_date")
             .withValueMap(new ValueMap()
             .withString(":v_date","2016-10-15"));
             items = index.query(querySpec);
          } else {
         System.out.println("\nInvalid index name");
             return;
          }
          Iterator<Item> iterator = items.iterator();
          System.out.println("Query: getting result...");
    
          while (iterator.hasNext()) {
             System.out.println(iterator.next().toJSONPretty());
          }
       }
    }
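
    A common pitfall when dispatching on an index name, as queryIndex does, is comparing strings with ==, which tests object identity rather than contents. Java string contents must be compared with equals(). A minimal standalone demonstration (not part of the DynamoDB example itself) −

    ```java
    public class StringCompareDemo {
       public static void main(String[] args) {
          // Construct a String with the same contents as the literal
          String indexName = new String("CreationDateIndex");

          // Reference comparison: false, because the two objects differ
          System.out.println(indexName == "CreationDateIndex");   // false

          // Content comparison: true
          System.out.println(indexName.equals("CreationDateIndex"));   // true
       }
    }
    ```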
    
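    The begins_with function used in the key condition expressions above matches sort-key string values by prefix. Its effect is analogous to Java's String.startsWith − the sketch below only illustrates the semantics with sample BugID values and is not how DynamoDB evaluates the condition internally −

    ```java
    import java.util.List;
    import java.util.stream.Collectors;

    public class BeginsWithDemo {
       public static void main(String[] args) {
          // Hypothetical BugID sort-key values
          List<String> bugIds = List.of("A-101", "A-102", "B-201");

          // begins_with(BugID, "A-") keeps only IDs with the "A-" prefix
          List<String> matched = bugIds.stream()
             .filter(id -> id.startsWith("A-"))
             .collect(Collectors.toList());

          System.out.println(matched);   // [A-101, A-102]
       }
    }
    ```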
