Google Cloud release notes

The following release notes cover the most recent changes over the last 30 days. For a comprehensive list, see the individual product release note pages.

You can see the latest product updates for all of Google Cloud on the Google Cloud release notes page.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/gcp-release-notes.xml

August 11, 2020

BigQuery

For flat-rate pricing, the minimum slot purchase is now 100 slots. Slots can be purchased in 100-slot increments.

Cloud Logging

Users now manage logs exclusions through logs sinks. As a result, custom roles that have the logging.sinks.* permissions can now control the volume of logs ingested into Cloud Logging through logs sinks.

We recommend that you review any custom roles with the logging.sinks.* permissions so that you can make adjustments as needed.
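
Under the new model, exclusions are attached to sinks such as the built-in _Default sink. A minimal sketch (the exclusion name and filter are hypothetical; check the gcloud logging reference for your SDK version):

```shell
# Exclude high-volume debug logs from ingestion by adding an
# exclusion to the built-in _Default sink.
gcloud logging sinks update _Default \
  --add-exclusion=name=exclude-debug,filter='severity=DEBUG'
```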

Beta release: You can now use Logs Buckets to centralize or divide your logs based on your needs. For information about this feature, refer to the Managing logs buckets guide.

August 10, 2020

Cloud Composer
  • New versions of Cloud Composer images: composer-1.11.2-airflow-1.10.3, composer-1.11.2-airflow-1.10.6, and composer-1.11.2-airflow-1.10.9. The default is composer-1.11.2-airflow-1.10.6. Upgrade your Cloud SDK to use features in this release.
  • Airflow 1.10.6 and 1.10.9: You can now specify a location argument when creating a BigQueryCheckOperator to use it in a different region from the Composer environment.
  • Fixed GKE setting incompatibilities that broke environment creation for Composer versions between 1.7.2 and 1.8.3.
  • When DAG serialization is on, plugins and DAGs are no longer synced when the Airflow web server starts up. This fixes web server failures when plugins use custom PyPI packages.
  • Fixed intermittent failures when triggering a DAG from the Airflow Web UI with DAG serialization turned on.
  • Fixed update operations (installing Python dependencies and upgrading environments) for domain-scoped projects.
  • Fixed a broken link to the Airflow documentation in Airflow 1.10.9.
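
The location argument mentioned above can be passed when instantiating the operator. A hedged sketch of a DAG file (the DAG name, project, dataset, and region are hypothetical; verify that your Composer image's operator accepts the location keyword):

```python
# Airflow 1.10.6+ on Cloud Composer: run a BigQuery check against a
# dataset in a region other than the Composer environment's region.
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.bigquery_check_operator import BigQueryCheckOperator

with DAG(
    "bq_check_example",
    start_date=datetime(2020, 8, 1),
    schedule_interval=None,
) as dag:
    check = BigQueryCheckOperator(
        task_id="row_count_check",
        sql="SELECT COUNT(*) FROM `my-project.my_dataset.my_table`",
        use_legacy_sql=False,
        location="europe-west4",  # new: dataset region, may differ from the environment
    )
```
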

August 08, 2020

Config Connector

Added support for BigtableTable.

Fixed a bug where a CRD would be marked as uninstalling on a dry-run delete.

August 07, 2020

Cloud Billing

You can now view a summary of all your spend-based committed use discounts (CUDs) and purchase new commitments in the commitment dashboard. The dashboard lists the commitment type, the region where it's located, current active commitments, term length, and the commitment start and end dates. See the documentation for more details.

Compute Engine

You can now update multiple instance properties using a single request from the command-line tool or the Compute Engine API. For more information, see Updating instance properties.

August 06, 2020

AI Platform Deep Learning VM Image

M53 release

TensorFlow Enterprise 2.3 images, including images that support CUDA 11.0, are now available.

BigQuery

BigQuery is now available in the following regions: Oregon (us-west1), Belgium (europe-west1), and Netherlands (europe-west4).

BigQuery BI Engine

BigQuery BI Engine is now available in the following regions: Oregon (us-west1), Belgium (europe-west1), and Netherlands (europe-west4).

BigQuery Data Transfer Service

BigQuery Data Transfer Service is now available in the following regions: Oregon (us-west1), Belgium (europe-west1), and Netherlands (europe-west4).

BigQuery ML

BigQuery ML is now available in the following regions: Oregon (us-west1), Belgium (europe-west1), and Netherlands (europe-west4).

Cloud Billing

If you have a negotiated pricing contract associated with your Cloud Billing account, then starting with your July 2020 invoice, the Cloud Billing report and the Cost Breakdown report can display your costs calculated using list prices, with your negotiated savings shown as a separate credit. This view helps you see how much you are saving on your Google Cloud costs because of your negotiated pricing contract.

For information on how to view your list costs and negotiated savings in reports, see the documentation.

Cloud Spanner

A new multi-region instance configuration is now available in North America - nam10 (Iowa/Salt Lake).

August 05, 2020

Anthos GKE on-prem

Cloud Monitoring error condition

Under certain conditions, the Cloud Monitoring pod, deployed by default in each new cluster, can become unresponsive. For example, when clusters are upgraded, storage data can become corrupted when pods in statefulset/prometheus-stackdriver-k8s are restarted.

Specifically, the monitoring pod stackdriver-prometheus-k8s-0 can be caught in a loop when corrupted data prevents prometheus-stackdriver-sidecar from writing to the cluster storage PersistentVolume.

The error can be diagnosed and recovered from manually by following the steps below.

Diagnosing the Cloud Monitoring failure

When the monitoring pod has failed, the logs will report the following:

{"log":"level=warn ts=2020-04-08T22:15:44.557Z caller=queue_manager.go:534 component=queue_manager msg=\"Unrecoverable error sending samples to remote storage\" err=\"rpc error: code = InvalidArgument desc = One or more TimeSeries could not be written: One or more points were written more frequently than the maximum sampling period configured for the metric.: timeSeries[0-114]; Unknown metric: kubernetes.io/anthos/scheduler_pending_pods: timeSeries[196-198]\"\n","stream":"stderr","time":"2020-04-08T22:15:44.558246866Z"}

{"log":"level=info ts=2020-04-08T22:15:44.656Z caller=queue_manager.go:229 component=queue_manager msg=\"Remote storage stopped.\"\n","stream":"stderr","time":"2020-04-08T22:15:44.656798666Z"}

{"log":"level=error ts=2020-04-08T22:15:44.663Z caller=main.go:603 err=\"corruption after 29032448 bytes: unexpected non-zero byte in padded page\"\n","stream":"stderr","time":"2020-04-08T22:15:44.663707748Z"}

{"log":"level=info ts=2020-04-08T22:15:44.663Z caller=main.go:605 msg=\"See you next time!\"\n","stream":"stderr","time":"2020-04-08T22:15:44.664000941Z"}
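
A quick way to scan exported container logs for this corruption signature is to parse each JSON line and check the inner log message (an illustrative sketch, not part of the product):

```python
import json

def find_corruption_errors(log_lines):
    """Return the inner log messages that report TSDB corruption."""
    hits = []
    for line in log_lines:
        try:
            entry = json.loads(line)
        except ValueError:
            continue  # skip non-JSON lines
        if "corruption after" in entry.get("log", ""):
            hits.append(entry["log"].strip())
    return hits

sample = [
    '{"log":"level=error caller=main.go:603 err=\\"corruption after 29032448 bytes\\"\\n","stream":"stderr","time":"2020-04-08T22:15:44Z"}',
    '{"log":"level=info msg=\\"See you next time!\\"\\n","stream":"stderr","time":"2020-04-08T22:15:44Z"}',
]
print(find_corruption_errors(sample))
```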

Recovering Cloud Monitoring

To recover Cloud Monitoring manually:

  1. Stop cluster monitoring. Scale down the stackdriver operator to prevent monitoring reconciliation:
    kubectl --kubeconfig /ADMIN_CLUSTER_KUBECONFIG --namespace kube-system scale deployment stackdriver-operator --replicas=0

  2. Delete the monitoring pipeline workloads:
    kubectl --kubeconfig /ADMIN_CLUSTER_KUBECONFIG --namespace kube-system delete statefulset stackdriver-prometheus-k8s

  3. Delete the monitoring pipeline PersistentVolumeClaims (PVCs):
    kubectl --kubeconfig /ADMIN_CLUSTER_KUBECONFIG --namespace kube-system delete pvc -l app=stackdriver-prometheus-k8s

  4. Restart cluster monitoring. Scale up the Stackdriver operator to reinstall a new monitoring pipeline and resume reconciliation:
    kubectl --kubeconfig /ADMIN_CLUSTER_KUBECONFIG --namespace kube-system scale deployment stackdriver-operator --replicas=1

Installer fails when creating vSphere datadisk

The GKE on-prem installer can fail if custom roles are bound at the wrong permissions level.

When the role binding is incorrect, creating a vSphere datadisk with govc hangs, and the disk is created with a size of 0.

To fix the issue, bind the custom role at the vSphere vCenter level (root).

If you want to bind the custom role at the DC level (or lower than root), you also need to bind the read-only role to the user at the root vCenter level.

For more information on role creation, see vCenter user account privileges.

Cloud Functions

Cloud Functions Java 11, Python 3.7 or 3.8, and Go 1.13 runtimes now build container images in the user's project, providing direct access to build logs and removing the preset build-time quota.

See Building Cloud Functions for details.

Istio on Google Kubernetes Engine

Starting with version 1.6, the Istio on GKE add-on uses the Istio Operator for installation and configuration. When you upgrade your cluster to 1.17.7-gke.8+, 1.17.8-gke.6+, or higher, the Istio 1.6 Operator and control plane are installed alongside the existing 1.4.x Istio control plane. The upgrade requires user action and follows the dual control plane upgrade process (referred to as canary upgrades in the Istio documentation). With a dual control plane upgrade, you can migrate to the 1.6 version by setting a label on your workloads to point to the new control plane and performing a rolling restart. To learn more, see Upgrading to Istio 1.6 with Operator.
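
The dual control plane migration described above boils down to relabeling workload namespaces and restarting them. A hedged sketch (the namespace and revision label are hypothetical; use the values from the linked upgrade guide):

```shell
# Point the namespace's workloads at the new 1.6 control plane revision,
# then roll the workloads so their sidecars reconnect to it.
kubectl label namespace my-namespace istio-injection- istio.io/rev=my-istio-16-revision --overwrite
kubectl rollout restart deployment --namespace my-namespace
```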

Pub/Sub

Pub/Sub message ordering is now available at the beta launch stage.

August 04, 2020

AI Platform Training

Read a new guide to distributed PyTorch training. You can use this guide with pre-built PyTorch containers, which are in beta.

Anthos GKE on AWS

Anthos GKE on AWS 1.4.1-gke.17 is released. This release fixes a memory leak that causes clusters to become unresponsive.

To upgrade your clusters, perform the following steps:

  1. Restart your control plane instances.
  2. Upgrade your management service to aws-1.4.1-gke.17.
  3. Upgrade your user cluster's AWSCluster and AWSNodePools to 1.16.9-gke.15.

Use version 1.16.9-gke.15 for creating new clusters.

Compute Engine

You can now attach a maximum of 24 local SSD partitions, for a total of 9 TB per instance. This is generally available on instances with N1 machine types. For more information, see Local SSDs.
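
The 9 TB figure follows from the partition count and the standard 375 GB local SSD partition size (a quick arithmetic check, assuming that standard size):

```python
# Each local SSD partition is 375 GB; up to 24 partitions per instance.
partitions = 24
partition_gb = 375
total_gb = partitions * partition_gb
print(total_gb)         # capacity in GB
print(total_gb / 1000)  # capacity in TB
```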

August 03, 2020

Anthos GKE on AWS

Anthos GKE on AWS 1.4.1-gke.15 clusters will experience a memory leak that results in an unresponsive cluster. A fix for this issue is in development.

If you are planning to deploy an Anthos GKE on AWS cluster, wait until the fix is ready.

Cloud Asset Inventory

k8s.io/Node fields deprecation

The following two fields for assets of k8s.io/Node are now deprecated in output exported to Cloud Storage and BigQuery.

  • metadata.resourceVersion
  • status.conditions.lastHeartbeatTime
Cloud Composer
  • New versions of Cloud Composer images: composer-1.11.1-airflow-1.10.3, composer-1.11.1-airflow-1.10.6, and composer-1.11.1-airflow-1.10.9. The default is composer-1.11.1-airflow-1.10.6. Upgrade your Cloud SDK to use features in this release.
  • Composer now enforces iam.serviceAccounts.actAs permission checks on the service account specified during Composer environment creation. See Creating environments for details.
  • Private IP environments can now be created using non-RFC 1918 CGN ranges (100.64.0.0/10).
  • New PyPI packages have been added for Composer version composer-1.11.0-airflow-1.10.6. These make it possible to install apache-airflow-backport-providers-google with no additional package upgrades.
  • The PyPI package google-cloud-datacatalog can now be installed on Composer environments running Airflow 1.10.6 and Python 3.
  • Cloud Composer 1.11.1+: Backport providers are installed by default for Airflow 1.10.6 and 1.10.9.
  • You can now use the label.worker_id filter in Cloud Monitoring logs to see logs sent out of a specific Airflow worker Pod.
  • With the Composer Beta API, you can now upgrade an environment to any of the three latest Composer versions (instead of just the latest).
  • You can now modify these previously blocked Airflow configurations: [scheduler] scheduler_heartbeat_sec, [scheduler] job_heartbeat_sec, [scheduler] run_duration
  • A more informative error message was added for environment creation failures caused by issues with Cloud SQL instance creation.
  • Improved error reporting has been added for update operations that change the web server image in cases where the error occurs before the new web server image is created.
  • The Airflow-worker liveness check has been changed so that a task just added to a queue will not fire an alert.
  • Reduced the amount of non-informative logs thrown by the environment in Composer 1.10.6.
  • Improved the syncing procedure for env_var.json in Airflow 1.10.9 (it should no longer throw "missing file:" errors).
  • Airflow-worker and airflow-scheduler will no longer throw "missing env_var.json" errors in Airflow 1.10.6.
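
The newly unblocked scheduler settings listed above can be overridden with gcloud composer. A sketch (the environment name, location, and values are hypothetical; the flag takes section-property pairs):

```shell
# Override previously blocked [scheduler] settings on an existing environment.
gcloud composer environments update my-environment \
  --location us-central1 \
  --update-airflow-configs=scheduler-scheduler_heartbeat_sec=10,scheduler-job_heartbeat_sec=10
```
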
Cloud Logging

Alpha release: You can now use Logs Buckets to centralize or divide your logs based on your needs. For information about this feature, refer to the Managing logs buckets guide. To participate in the alpha or to get notified when Logs Buckets goes beta, fill out the sign up form.

Cloud Run

When setting up Continuous Deployment in the Cloud Run user interface, you can now select a repository that contains Go, Node.js, Python, Java, or .NET Core code. It is built using Google Cloud Buildpacks, without needing a Dockerfile.

Compute Engine

You can now access C2 machine types in the following zones: Taiwan (asia-east1-a), Singapore (asia-southeast1-a), São Paulo (southamerica-east1-b and southamerica-east1-c), and Oregon (us-west1-b). For more information, see VM instance pricing.

Dataproc

Dataproc users are required to have service account ActAs permission to deploy Dataproc resources, for example, to create clusters and submit jobs. See Managing service account impersonation for more information.

Opt-in for existing Dataproc customers: This change does not automatically apply to current Dataproc customers without ActAs permission. To opt in, see Securing Dataproc, Dataflow, and Cloud Data Fusion.
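
In practice, granting ActAs means giving the user the Service Account User role on the service account the cluster runs as. A hedged sketch (the project, service account, and user are hypothetical):

```shell
# Let alice@example.com act as the cluster's service account.
gcloud iam service-accounts add-iam-policy-binding \
  my-cluster-sa@my-project.iam.gserviceaccount.com \
  --member="user:alice@example.com" \
  --role="roles/iam.serviceAccountUser"
```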

July 31, 2020

BigQuery

An updated version of the Magnitude Simba ODBC driver includes performance improvements and bug fixes.

Cloud Functions

Cloud Functions is now available in the following regions:

  • asia-south1 (Mumbai)
  • asia-southeast2 (Jakarta)
  • asia-northeast3 (Seoul)

See Cloud Functions Locations for details.

Compute Engine

N2D machine types are now available in asia-east1 in all three zones. For more information, see the VM instance pricing page.

Config Connector

Added support for ArtifactRegistryRepository.

Changed DataflowJob to allow spec.parameters and spec.ipConfiguration to be updated.

Fixed an issue that caused ContainerNodePool and SQLDatabase to display UpdateFailed when the referenced ContainerCluster or SQLDatabase was not ready.

Fixed an issue preventing the creation of BigQuery resources that read from Google Drive files due to insufficient OAuth 2.0 scopes.

Fixed an issue causing SourceRepoRepository to update constantly even when there were no changes.

Dataproc

Enabled Kerberos automatic-configuration feature. When creating a cluster, users can enable Kerberos by setting the dataproc:kerberos.beta.automatic-config.enable cluster property to true. When using this feature, users do not need to specify the Kerberos root principal password with the --kerberos-root-principal-password and --kerberos-kms-key-uri flags.
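
Enabling the feature at cluster creation looks like this (the cluster name and region are hypothetical; the property name is taken from the note above):

```shell
# Create a cluster with Kerberos automatic configuration enabled;
# no root principal password or KMS key flags are needed.
gcloud dataproc clusters create my-kerberos-cluster \
  --region us-central1 \
  --properties dataproc:kerberos.beta.automatic-config.enable=true
```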

New sub-minor versions of Dataproc images: 1.3.65-debian10, 1.3.65-ubuntu18, 1.4.36-debian10, 1.4.36-ubuntu18, 1.5.11-debian10, 1.5.11-ubuntu18, 2.0.0-RC7-debian10, and 2.0.0-RC7-ubuntu18.

1.3+ images (includes Preview image):

  • HADOOP-16984: Added support to read history files only from the done directory.

  • MAPREDUCE-7279: Display the Resource Manager name on the HistoryServer web page.

  • SPARK-32135: Show the Spark driver name on the Spark history web page.

  • SPARK-32097: Allow reading Spark history log files via the Spark history server from multiple directories.

Images 1.3 - 1.5:

  • HIVE-20600: Fixed Hive Metastore connection leak.

Images 1.5 - 2.0 preview:

Fixed an issue where optional components that depend on HDFS failed on single node clusters.

Fixed an issue that caused workflows to be stuck in the RUNNING state when managed clusters (created by the workflow) were deleted while the workflow was running.

Identity and Access Management

We are delaying the upcoming changes for deleted members that are bound to a role. These changes will take effect starting on September 14, 2020.

Storage Transfer Service

Transfers from Microsoft Azure Blob Storage are now generally available.

July 30, 2020

Anthos

Anthos 1.3.3 is now available.

Updated components:

Anthos Config Management

Updated the git-sync image to fix security vulnerability CVE-2019-5482.

Anthos GKE on-prem

Anthos GKE on-prem 1.3.3-gke.0 is now available. To upgrade, see Upgrading GKE on-prem. GKE on-prem 1.3.3-gke.0 clusters run on Kubernetes 1.15.12-gke.9.

Cloud Composer

Cloud Composer is now available in Osaka (asia-northeast2).

Cloud Logging

The Logs field explorer panel is now generally available (GA). To learn more, see the Logs field explorer section of the Logs Viewer (Preview) interface page.

Cloud Run

You can now tag Cloud Run revisions. Tagged revisions get a dedicated URL, allowing developers to reach those specific revisions without needing to allocate traffic to them.

Cloud Spanner

The Cloud Spanner emulator is now generally available, enabling you to develop and test applications locally. For more information, see Using the Cloud Spanner Emulator.

Compute Engine

When creating patch jobs, you can now choose whether to deploy zones concurrently or one at a time. You can also now specify a disruption budget for your VMs. For more information, see Patch rollout options.

N2 machine types are now available in São Paulo (southamerica-east1) in all three zones. For more information, see VM instance pricing.

You can access m2-megamem memory-optimized machine types in all zones that already have m2-ultramem memory-optimized machine types. These two machine types have also been added to asia-south1-b. You can use m1-ultramem machine types in asia-south1-a. To learn more, read Memory-optimized machine type family.

Dialogflow

GA (general availability) launch of mega agents.

Beta launch of the Facebook Workplace integration.

Network Intelligence Center

Network Topology no longer supports infrastructure segments. This feature is deprecated and will be completely removed after 90 days. If you have any questions, see Getting support.

July 28, 2020

Compute Engine

Improved validation checks will be introduced on API calls to compute.googleapis.com starting on August 3, 2020 to increase reliability and REST API compliance of the Compute Engine platform for all users. Learn how to Validate API Requests to ensure your requests are properly formed.

Memorystore for Redis

Support for VPC Service Controls on Memorystore for Redis is now Generally Available.

Migrate for Anthos

The migctl migration cleanup command has been removed and is no longer necessary.

In previous releases, you used a command in the form: migctl source create ce my-ce-src --project my-project --zone zone to create a migration for Compute Engine. The --zone option has been removed when creating a Compute Engine migration. Using the --zone option in this release causes an error.

The migctl migration logs command has been removed. You now use the Google Cloud Console to view logs.

Added the new --json-key sa.json option to the migctl source create ce command to create a migration for Compute Engine, where sa.json specifies a service account. See Optionally creating a service account when using Compute Engine as a migration source for more.

To edit the migration plan, you must now use the migctl migration get my-migration command to download the plan. After you are done editing the plan, you have to upload it by using the migctl migration update my-migration command. See Customizing a migration plan for more.

Added support for Anthos GKE on-prem clusters running on VMware. On-prem support lets you migrate source VM workloads in a vCenter/vSphere environment to a GKE on-prem cluster running in the same vCenter/vSphere environment. See Migration prerequisites for the requirements for on-prem migration.

The Google Cloud Console provides a web-based, graphical user interface that you can use to manage your Google Cloud projects and resources. Migrate for Anthos now supports the migration of workloads by using the Google Cloud Console.

In this release, Migrate for Anthos on the Cloud Console does not support migrations for Windows or for on-prem, including monitoring Windows or on-prem migrations.

Migrate for Anthos now includes Custom Resource Definitions (CRDs) that enable you to easily create and manage migrations using an API. For example, you can use these CRDs to build your own automated tools.

Added the node-selectors and tolerations options to the migctl setup install installation command that lets you install Migrate for Anthos on a specific set of nodes or node pools in a cluster. See Installing Migrate for Anthos.

You can use Migrate for Anthos to migrate Windows VMs to workloads on GKE. This process clones your Compute Engine VM disks and uses the clone to generate artifacts (including a Dockerfile and a zip archive with extracted workload files and settings) you can use to build a deployable GKE image. See Adding a Windows migration source.

160309992: Editing a migration plan from the GUI console might fail if it was also edited using migctl.

161135630: Attempting multiple migrations of the same remote VM (from VMware, AWS or Azure) simultaneously, might result in a stuck migration process.

Workaround: Delete the stuck migration.

161214397: For Anthos on-prem, in case of a missing service-account to upload container images to the Container Registry, the migration might get stuck.

Workaround: Add the service-account. If you are using the Migrate for Anthos CRD API, delete the GenerateArtifactsTask and recreate it. If using the migctl CLI tool, delete the migration and recreate it. You can first download the migration YAML using migctl migration get to back up any customizations you have made.

161110816: migctl migration create with a source that doesn't exist fails with a non-informative error message: request was denied.

161104564: Creating a Linux migration with wrong os-type specification causes the migration process to get stuck until deleted.

160858543, 160836394, 160844377, 154430477, 154403665, 153241390, 153239696, 152408818, 151516642, 132002453: Unstable network in Migrate for Anthos infrastructure, or a GKE node restart, might cause migration to get stuck.

Workaround: Delete the migration and re-create it. If recreating the migration does not solve the issue, please contact support.

161787358: In some cases, upgrading from version v1.3 to v1.4 might fail with Failed to convert source message.

Workaround: Re-run the upgrade command.

153811691, 153439420: Migrate for Anthos support for older Java does not handle OpenJDK 7 and 8 CPU resource calculations.

152974631: Using GKE nodes with CPU and Memory configurations below the recommended values might cause migrations to get stuck.

GKE on-prem preview: If a source was created with migctl source create using the wrong credentials, you could not delete the migration with migctl migration delete. This issue has been fixed in the GA release of on-prem support.

In version 1.4, Migrate for Anthos by default installs to, and performs migrations in, the v2k-system namespace. In previous releases, you could specify the namespace; that option has been removed.

157890913, 160082702, 161125635, 159693579: A migration might continue to indicate that it is running, while an issue encountered prevents further processing.

Workaround: Check event messages on the migration object by using the verbose migctl status command: migctl migration status migration_name -v. You might be able to correct the issue to allow the migration to continue; if an Error event is listed without further retries, delete and recreate the migration.

An example is when creating a Windows migration on a cluster with no Windows nodes. In this case the event message will show: Warning FailedScheduling 10s Pod discover-xyz 0/1 nodes are available: 1 node(s) didn't match node selector.

VPC Service Controls

General availability for the following integration:

July 27, 2020

BigQuery

INFORMATION_SCHEMA views for streaming metadata are now in alpha. You can use these views to retrieve historical and real-time information about streaming data into BigQuery.

Cloud Run

Cloud Run is now available in asia-southeast1 (Singapore)

Dataflow

Dataflow now supports Dataflow Shuffle, Streaming Engine, FlexRS, and the following regional endpoints in GA:

  • northamerica-northeast1 (Montréal)
  • asia-southeast1 (Singapore)
  • australia-southeast1 (Sydney)
Dialogflow

Beta launch of Dialogflow Messenger. This new integration provides a customizable chat dialog for your agent that can be embedded in your website.

Security Command Center

Security Command Center v1beta1 API will be disabled on Jan. 31, 2021. All users will be required to migrate to Security Command Center v1 API, which is now in general availability.

  • Update to Google-provided v1 API client libraries.
  • Move your client libraries and HTTP/grpc calls to v1 by following instructions in the reference documentation for service endpoints and SDK configuration.
  • If you call this service using your own libraries, follow the guidance in our Security Command Center API Overview when making API requests.
  • To use ListFindings calls in the v1 API, update your response handling to respond to an extra layer of object nesting, as shown below:
    • v1beta1: response.getFindings().forEach( x -> ....)
    • v1: response.getListFindingsResults().forEach(x -> { x.getFinding(); .... })
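
The extra nesting layer can be illustrated with plain dictionaries (a sketch of the two response shapes, not the client library API):

```python
# v1beta1: the response carries findings directly.
v1beta1_response = {"findings": [{"name": "finding-1"}, {"name": "finding-2"}]}
v1beta1_names = [f["name"] for f in v1beta1_response["findings"]]

# v1: each result wraps the finding in a ListFindingsResult.
v1_response = {
    "listFindingsResults": [
        {"finding": {"name": "finding-1"}},
        {"finding": {"name": "finding-2"}},
    ]
}
v1_names = [r["finding"]["name"] for r in v1_response["listFindingsResults"]]

assert v1beta1_names == v1_names  # same findings, one extra layer of nesting
print(v1_names)
```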

Additional changes to the v1 API are listed below. Learn more about Using the Security Command Center API.

The SeverityLevel finding source property for all Security Health Analytics findings will be removed and replaced with a field named Severity, which retains the same values.

  • Impact: Finding notification filters, post-processing, and alerting based on the SeverityLevel finding source property will no longer be possible.
  • Recommendation: Replace the SeverityLevel finding source property with the Severity finding attribute property to retain existing functionality.

The nodePools finding source property will be removed from OVER_PRIVILEGED_SCOPES findings and replaced with a source property named VulnerableNodePools.

  • Impact: Finding notification filters, post-processing, and alerting based on this finding source property may fail.
  • Recommendation: Modify workflows as necessary to utilize the new VulnerableNodePools source property.

The finding category 2SV_NOT_ENFORCED is being renamed MFA_NOT_ENFORCED.

  • Impact: Case-sensitive finding notification filters, post-processing, and alerting based on the previous finding category name may fail.
  • Recommendation: Update any post-processing to use the new category name.

The ExceptionInstructions source property will be removed from all Security Health Analytics findings.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property may fail.
  • In progress: A new property that will indicate the current state of findings is being developed.

The ProjectId source property will be removed from all Security Health Analytics findings.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property may fail.
  • Recommendation: Update workflows to utilize the project ID in the resource.project_display_name field of a ListFindingsResult.

The AssetSettings finding source property will be removed from the PUBLIC_SQL_INSTANCE, SQL_PUBLIC_IP, SSL_NOT_ENFORCED, AUTO_BACKUP_DISABLED, SQL_NO_ROOT_PASSWORD, and SQL_WEAK_ROOT_PASSWORD finding types, as it contains data duplicated from the asset entity.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Replace the AssetSettings finding source property with the Settings resource property from the asset underlying the finding to retain existing functionality.

The Allowed finding source property in OPEN_FIREWALL findings will be replaced with a new field named ExternallyAccessibleProtocolsAndPorts, which will contain a subset of the values from the Allowed property.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Modify your workflows as necessary to utilize the new ExternallyAccessibleProtocolsAndPorts source property.

The SourceRanges finding source property in OPEN_FIREWALL findings will be replaced with a new field named ExternalSourceRanges, which will contain a subset of the values from the SourceRanges property.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will fail.
  • Recommendation: Modify your workflows as necessary to utilize the new ExternalSourceRanges source property.

As of Jan. 31, 2021, the UpdateFinding API will no longer support storing string properties that are longer than 7,000 characters.

  • Impact: Calls to UpdateFinding that seek to store string properties longer than 7,000 characters will be rejected with an invalid argument error.
  • Recommendation: Consider storing string properties longer than 7,000 characters as JSON structs or JSON lists. Learn more about writing findings.

As of Sept. 1, 2020, the ListFindings API will no longer support searching on finding properties that are longer than 7,000 characters.

  • Impact: Searches on strings that are longer than 7,000 characters will not return expected results. For example, if a partial string match filter has a match at the 7,005th character of a property in a finding, that finding will not be returned, because the match is past the 7,000-character threshold. An exception will not be returned.
  • Recommendation: Customers can remove filter restrictions (e.g. x : "some-value") that are supposed to match very long properties. The results can then be filtered locally to remove findings whose strings do not match designated criteria. Learn more about filtering findings.

The OffendingIamRoles source property in extensions of IAM Scanner Configurations will use structured data instead of a JSON-formatted string.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type on findings of the following categories: ADMIN_SERVICE_ACCOUNT, NON_ORG_IAM_MEMBER, PRIMITIVE_ROLES_USED, OVER_PRIVILEGED_SERVICE_ACCOUNT_USER, REDIS_ROLE_USED_ON_ORG, SERVICE_ACCOUNT_ROLE_SEPARATION, and KMS_ROLE_SEPARATION.
  • Recommendation: Update workflows to utilize the new data type.

The QualifiedLogMetricNames source property in specific Monitoring findings from Security Health Analytics will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type for findings of the following categories: AUDIT_CONFIG_NOT_MONITORED, BUCKET_IAM_NOT_MONITORED, CUSTOM_ROLE_NOT_MONITORED, FIREWALL_NOT_MONITORED, NETWORK_NOT_MONITORED, OWNER_NOT_MONITORED, ROUTE_NOT_MONITORED, and SQL_INSTANCE_NOT_MONITORED.
  • Recommendation: Update workflows to utilize the new data type.

The AlertPolicyFailureReasons source property in specific Monitoring findings from Security Health Analytics will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type for findings of the following categories: AUDIT_CONFIG_NOT_MONITORED, BUCKET_IAM_NOT_MONITORED, CUSTOM_ROLE_NOT_MONITORED, FIREWALL_NOT_MONITORED, NETWORK_NOT_MONITORED, OWNER_NOT_MONITORED, ROUTE_NOT_MONITORED, and SQL_INSTANCE_NOT_MONITORED.
  • Recommendation: Update workflows to utilize the new data type.

The CompatibleFeatures source property in WEAK_SSL_POLICY findings will use a list instead of a character-separated string value.

  • Impact: Finding notification filters, post-processing, and alerting based on the finding source property will need to be updated to take advantage of the new data type for findings.
  • Recommendation: Update workflows to utilize the new data type.
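
The recommendation above to store long string properties as JSON lists can be sketched as follows (the helper name is hypothetical; the 7,000-character limit is from the note):

```python
import json

MAX_LEN = 7000  # UpdateFinding's per-string limit as of Jan. 31, 2021

def to_json_list_property(value, chunk=MAX_LEN):
    """Split a long string into a JSON list whose elements fit the limit."""
    parts = [value[i:i + chunk] for i in range(0, len(value), chunk)]
    return json.dumps(parts)

long_value = "x" * 15000
encoded = to_json_list_property(long_value)
decoded = json.loads(encoded)
assert all(len(p) <= MAX_LEN for p in decoded)  # every chunk is storable
assert "".join(decoded) == long_value           # lossless round trip
print(len(decoded))
```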

July 25, 2020

Cloud Load Balancing

The introductory period during which you could use Internal HTTP(S) Load Balancing without charge has ended. Starting July 25, 2020, your usage of Internal HTTP(S) Load Balancing will be billed to your project.

July 24, 2020

Anthos GKE on AWS

Anthos GKE on AWS is now generally available.

Clusters support in-place upgrades, with the ability to upgrade the control plane and node pools separately.

Clusters can be deployed in a high availability (HA) configuration, where control plane instances and node pools are spread across multiple availability zones.

Clusters have been validated to support up to 200 nodes and 6000 pods.

The number of nodes can be scaled dynamically based on traffic volume to increase utilization, reduce cost, and improve performance.

Anthos can be deployed within existing AWS VPCs, leveraging existing security groups to secure those clusters. Customers can route ingress traffic using NLBs and ALBs. Additionally, Anthos on AWS supports AWS IAM and OIDC. This makes deploying Anthos easy, eliminates the need to provision new accounts, and minimizes configuration of the environment.

With Anthos Config Management, enterprises can set policies on their AWS workloads, and with Anthos Service Mesh, they can monitor, manage, and secure them.

Kubernetes settings (flags and sysctl settings) have been updated to match GKE.

Upgrades from beta versions are not supported. To install Anthos GKE on AWS, you must remove your user and management clusters, then reinstall them.

Anthos Service Mesh

Anthos Service Mesh on GKE on AWS is supported.

For more information, see Installing Anthos Service Mesh on GKE on AWS.

BigQuery

BigQuery Data Transfer Service is now available in the following regions: Montréal (northamerica-northeast1), Frankfurt (europe-west3), Mumbai (asia-south1), and Seoul (asia-northeast3).

BigQuery Data Transfer Service

BigQuery Data Transfer Service is now available in the following regions: Montréal (northamerica-northeast1), Frankfurt (europe-west3), Mumbai (asia-south1), and Seoul (asia-northeast3).

Cloud Composer
  • New versions of Cloud Composer images: composer-1.11.0-airflow-1.10.2, composer-1.11.0-airflow-1.10.3, composer-1.11.0-airflow-1.10.6, and composer-1.11.0-airflow-1.10.9. The default is composer-1.11.0-airflow-1.10.3. Upgrade your Cloud SDK to use features in this release.
  • Airflow 1.10.9 is now supported.
  • Environment upgrades have been enabled for the latest two Composer versions (1.11.0 and 1.10.6).
  • Added a retry feature to the Airflow CeleryExecutor (disabled by default). You can configure the number of times Celery will attempt to execute a task by setting the [celery] max_command_attempts property. The delay between each retry can also be adjusted with [celery] command_retry_wait_duration (default: 5 seconds).
  • New PyPI packages have been added for Composer version composer-1.11.0-airflow-1.10.6. These make it possible to install apache-airflow-backport-providers-google with no additional package upgrades.
  • The PyPI package google-cloud-datacatalog can now be installed on Composer environments running Airflow 1.10.6 and Python 3.
  • Fixed synchronization of environment variables to the web server.
  • Improved error reporting when PyPI package installation fails.
  • Composer versions 1.6.1, 1.7.0, and 1.7.1 are now deprecated.
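
The Celery retry options above live in the [celery] section of the Airflow configuration. In Cloud Composer you would set them as Airflow configuration overrides rather than editing airflow.cfg directly; a sketch with assumed values (the option names come from the note above, the values are illustrative):

```ini
[celery]
# Number of times Celery attempts to execute a task
# (retry feature, disabled by default). Illustrative value.
max_command_attempts = 3
# Delay between retries, in seconds (default: 5). Illustrative value.
command_retry_wait_duration = 10
```
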
Compute Engine
  • NVIDIA® Tesla® T4 GPUs are now available in the following additional regions and zones:

    • Ashburn, Northern Virginia, USA: us-east4-b

    For information about using T4 GPUs on Compute Engine, see GPUs on Compute Engine.

N2 machines are now available in Northern Virginia (us-east4-c). For more information, see the VM instance pricing page.

Data Catalog

Data Catalog is now available in Seoul (asia-northeast3).

Dataproc

Terminals started in Jupyter and JupyterLab now use login shells. The terminals behave as if you SSH'd into the cluster as root.

Upgraded the jupyter-gcs-contents-manager package to the latest version. This upgrade fixes a bug where an attempt to create a file in the virtual top-level directory returned a 404 (NOT FOUND) error instead of the expected 403 (PERMISSION DENIED) error.

New sub-minor versions of Dataproc images: 1.3.64-debian10, 1.3.64-ubuntu18, 1.4.35-debian10, 1.4.35-ubuntu18, 1.5.10-debian10, 1.5.10-ubuntu18, 2.0.0-RC6-debian10, and 2.0.0-RC6-ubuntu18.

Fixed a bug in which the HDFS DataNode daemon was enabled on secondary workers but not started (except on VM reboot if started automatically by systemd).

Fixed a bug in which StartLimitIntervalSec=0 appeared in the Service section instead of the Unit section for systemd services, which disabled rate limiting for retries when systemd restarted a service.

July 23, 2020

Anthos Config Management

Config Connector has been updated in Anthos Config Management to version 1.13.1.

Anthos Config Management now includes Hierarchy Controller as a beta feature. For more information on this component, see the Hierarchy Controller overview.

Policy Controller users may now enable --log-denies to log all denies and dryrun failures. This is useful when trying to see what is being denied or fails dry-run and for keeping a log to debug cluster problems without looking through the status of all constraints. This is configured by setting spec.policyController.logDeniesEnabled: true in the configuration file for the Operator. There is an example in the section on Installing Policy Controller.
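
A minimal sketch of the Operator configuration with this option enabled (assuming the standard ConfigManagement resource shape; metadata and other field values here are placeholders):

```yaml
# Sketch: only spec.policyController.logDeniesEnabled is the new option
# described above; other values are illustrative.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  policyController:
    enabled: true
    # Log all denies and dryrun failures
    logDeniesEnabled: true
```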

This release includes several logging and performance improvements.

This release includes several fixes and improvements for the nomos command line utility.

The use of unsecured HTTP for GitHub repo connections or in an http_proxy is now discouraged, and support for unsecured HTTP will be removed in a future release. HTTPS will continue to be supported for GitHub repo and local proxy connections.

This release improves the handling of GitHub repositories with very large histories.

Prior to this release, Config Sync and kubectl controllers and processes used the same annotation (kubectl.kubernetes.io/last-applied-configuration) to calculate three-way merge patches. The shared annotation sometimes resulted in resource fights, causing unnecessary removal of each other's fields. Config Sync now uses its own annotation, which prevents resource clashes.

In most cases, this change will be transparent to you. However, there are two cases where some previously unspecified behavior will change.

The first case is when you have run kubectl apply on an unmanaged resource in a cluster, and you later add that same resource to the GitHub repo. Previously, Config Sync would have pruned any fields that were previously applied but not declared in the GitHub repo. Now, Config Sync writes the declared fields to the resource and leaves undeclared fields in place. If you want to remove those fields, do one of the following:

  • Get a local copy of the resource from GitHub and kubectl apply it.
  • Use kubectl edit --save-config to remove the fields directly.

The second case is when you stop managing a resource on the cluster or even stop all of Config Sync on a cluster. In this case, if you want to prune fields from a previously managed resource, you will see different behavior. Previously, you could get a local copy of the resource from GitHub, remove the unwanted fields, and kubectl apply it. Now, kubectl apply no longer prunes the missing fields. If you want to remove those fields, do one of the following:

  • Call kubectl apply set-last-applied with the unmodified resource from GitHub, then remove unwanted fields and kubectl apply it again without the set-last-applied flag.
  • Use kubectl edit --save-config to remove the fields directly.

In error messages, links to error docs are now more concise.

Anthos GKE on-prem

Anthos GKE on-prem 1.4.1-gke.1 is now available. To upgrade, see Upgrading GKE on-prem. GKE on-prem 1.4.1-gke.1 clusters run on Kubernetes 1.16.9-gke.14.

Anthos Identity Service LDAP authentication is now available in Alpha for GKE on-prem

Contact support if you are interested in a trial of the LDAP authentication feature in GKE on-prem.

Support for F5 BIG-IP load balancer credentials update

This preview release enables customers to manage and update the F5 BIG-IP load balancer credentials by using the gkectl update credentials f5bigip command.

Functionality changes:

  • The Ubuntu image is upgraded to include the newest packages.
  • Preflight checks are updated to validate that the gkectl version matches the target cluster version for cluster creation and upgrade.
  • Preflight checks are updated to validate the Windows OS version used for running gkeadm. The gkeadm command-line tool is only available for Linux, Windows 10, and Windows Server 2019.
  • gkeadm is updated to populate network.vCenter.networkName in both admin cluster and user cluster configuration files.

Fixes:

  • The static IP address used by the admin workstation is now removed from ~/.ssh/known_hosts after an upgrade, which avoids a manual workaround.
  • Resolved a known issue in which network.vCenter.networkName was not populated in the user cluster configuration file during user cluster creation.
  • Resolved a user cluster upgrade-related issue so that the upgrade waits only for the machines and pods in the same namespace within the cluster to be ready before completing.
  • Updated the default value for ingressHTTPNodePort and ingressHTTPSNodePort in the loadBalancer.manualLB section of the admin cluster configuration file.
  • Fixed CVE-2020-8558 and CVE-2020-8559 described in Security bulletins.
  • Logging and monitoring: Resolved an issue in which stackdriver-log-forwarder was not scheduled on the master node of the admin cluster.
  • Resolved the following known issues published in the 1.4.0 release notes:
    • If a user cluster is created without any node pool named the same as the cluster, managing the node pools using gkectl update cluster would fail. To avoid this issue, when creating a user cluster, you need to name one node pool the same as the cluster.
    • The gkectl command might exit with panic when converting config from "/path/to/config.yaml" to v1 config files. When that occurs, you can resolve the issue by removing the unused bundled load balancer section ("loadbalancerconfig") in the config file.
    • When using gkeadm to upgrade an admin workstation on Windows, the info file filled out from this template needs to have the line endings converted to use Unix line endings (LF) instead of Windows line endings (CRLF). You can use Notepad++ to convert the line endings.
    • When running a preflight check for config.yaml that contains both admincluster and usercluster sections, the "data disk" check in the "user cluster vCenter" category might fail with the message: [FAILURE] Data Disk: Data disk is not in a folder. Use a data disk in a folder when using vSAN datastore. User clusters don't use data disks, and it's safe to ignore the failure.
    • When upgrading the admin cluster, the preflight check for the user cluster OS image validation will fail. The user cluster OS image is not used in this case, and it's safe to ignore the "User Cluster OS Image Exists" failure in this case.
    • User cluster creation and upgrade might be stuck with the error: Failed to update machine status: no matches for kind "Machine" in version "cluster.k8s.io/v1alpha1". To resolve this, you need to delete the clusterapi pod in the user cluster namespace in the admin cluster.

Known issues:

  • During reboots, the data disk is not remounted on the admin workstation when using GKE on-prem 1.4.0 or 1.4.1 because the startup script is not run after the initial creation. To resolve this, you can run sudo mount /dev/sdb1 /home/ubuntu.
Cloud Billing

Export your Cloud Billing account SKU prices to BigQuery. You can now export your pricing information for Google Cloud and Google Maps Platform SKUs to BigQuery. Exporting your pricing data allows you to audit and analyze it, and to join it with your exported cost data. The pricing export includes list prices, pricing tiers, and, when applicable, any promotional or negotiated pricing. See the documentation for more details.

Dialogflow

Amazon Alexa importer and exporter are no longer supported.

Network Intelligence Center

Network Topology includes two new metrics for connections between entities: packet loss and latency. Additionally, you can now use a drop-down menu to select which metric Network Topology overlays on traffic paths. For more information, see Viewing metrics for traffic between entities and Network Topology metrics reference.

Virtual Private Cloud

Serverless VPC Access support for Shared VPC is now available in Beta.

July 22, 2020

Anthos Service Mesh

1.6.5-asm.7, 1.5.8-asm.7, and 1.4.10-asm.15 are now available

This release provides these features and fixes:

  • Builds Istiod (Pilot), Citadel Agent, Pilot Agent, Galley, and Sidecar Injector with Go+BoringCrypto.
  • Builds Istio Proxy (Envoy) with the --define boringssl=fips option.
  • Ensures the components listed above use FIPS-compliant algorithms.
Cloud Bigtable

Cloud Bigtable's fully integrated backups feature is now generally available. Backups let you save a copy of a table's schema and data and restore the backup to a new table at a later time.

July 21, 2020

AutoML Video Intelligence Object Tracking

In April 2020, a model upgrade for the AutoML Video Object Tracking feature was released. This release is for non-downloadable models only. Models trained after April 2020 may show improvements in the evaluation results.

Cloud Run

Cloud Run resources are now available in Cloud Asset Inventory

Compute Engine

You can now create balanced persistent disks, in addition to standard and SSD persistent disks. Balanced persistent disks are an alternative to SSD persistent disks that balance performance and cost. For more information, see Persistent disk types.

Config Connector

This release includes bug fixes and performance improvements.

Istio on Google Kubernetes Engine

Istio 1.4.10-gke.4

Includes the same security fixes as OSS Istio 1.4.10.

Recommendations AI

Recommendations AI public beta

Recommendations AI is now in public beta.

New pricing available

Pricing for Recommendations AI has been updated for public beta. For new pricing and free trial details, see Pricing.

UI redesign

The Recommendations AI console has a new look. You'll see a new layout, including a redesigned dashboard and improved alerts setup.

New support resources

We have new support resources available.

See Getting support for all support resources.

New FAQ page

A Frequently Asked Questions page is now available. See the FAQ here.

Traffic Director

Traffic Director supports proxyless gRPC applications in General Availability. In this deployment model, gRPC applications can participate in a service mesh without needing a sidecar proxy.

July 20, 2020

AI Platform Training

You can now train a PyTorch model on AI Platform Training by using a pre-built PyTorch container. Pre-built PyTorch containers are available in beta.

Data Catalog

Data Catalog is now available in Salt Lake City (us-west3) and Las Vegas (us-west4).

Identity and Access Management

We are delaying the upcoming changes for deleted members that are bound to a role. These changes will take effect starting on August 31, 2020.

Resource Manager

The Organization Policy for enabling detailed Cloud Audit Logs has launched into general availability.

Secret Manager

Secret Manager adds support for the following curated Cloud IAM roles:

  • Secret Manager Secret Version Adder (roles/secretmanager.secretVersionAdder)
  • Secret Manager Secret Version Manager (roles/secretmanager.secretVersionManager)

To learn more, see IAM and access control.

VPC Service Controls

General availability for the following integration:

July 17, 2020

App Engine standard environment Java
  • Updated Java SDK to version 1.9.81
AutoML Translation

For test data, added support for the .tmx file type when evaluating existing models. For more information, see Evaluating models.

Compute Engine

The Organization Policy for restricting protocol forwarding creation has launched into Beta.

Dataproc

Dataproc now uses Shielded VMs for Debian 10 and Ubuntu 18.04 clusters by default.

The Proxy-Authorization header is now accepted in place of the Authorization header to authenticate programmatic API calls to backends through Component Gateway. If Proxy-Authorization is set to a bearer token, Component Gateway forwards the Authorization header to the backend when that header does not contain a bearer token.

For example, this allows you to set Proxy-Authorization: Bearer <google-access-token> to authenticate to Component Gateway while also setting Authorization: Basic ... to authenticate to HiveServer2 with HTTP basic auth.
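
As a sketch of that example, a programmatic client might build the two headers like this (`component_gateway_headers` and all values are hypothetical; only header construction is shown, and no request is sent):

```python
import base64

def component_gateway_headers(google_access_token, hive_user, hive_password):
    """Build headers that authenticate to Component Gateway with a
    Google bearer token while passing HTTP basic auth through to
    HiveServer2 in the Authorization header."""
    basic = base64.b64encode(f"{hive_user}:{hive_password}".encode()).decode()
    return {
        # Consumed by Component Gateway (bearer token)
        "Proxy-Authorization": f"Bearer {google_access_token}",
        # Forwarded to the backend because it is not a bearer token
        "Authorization": f"Basic {basic}",
    }

headers = component_gateway_headers("ya29.example-token", "hive", "secret")
```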

Added support for Zeppelin Spark and shell interpreters in Kerberized clusters by default.

New sub-minor versions of Dataproc images: 1.3.63-debian10, 1.3.63-ubuntu18, 1.4.34-debian10, 1.4.34-ubuntu18, 1.5.9-debian10, 1.5.9-ubuntu18, 2.0.0-RC5-debian10, and 2.0.0-RC5-ubuntu18.

Image 2.0 preview:

If a project's regional Dataproc staging bucket is manually deleted, it will be recreated automatically when a cluster is subsequently created in that region.

Resource Manager

The Organization Policy for restricting protocol forwarding creation has launched into public beta.

July 16, 2020

BigQuery

BigQuery GIS now supports two new functions, ST_CONVEXHULL and ST_DUMP:

  • ST_CONVEXHULL returns the smallest convex GEOGRAPHY that covers the input.
  • ST_DUMP returns an ARRAY of simple GEOGRAPHYs where each element is a component of the input GEOGRAPHY.

For more information, see the ST_CONVEXHULL and ST_DUMP reference pages.

Cloud Data Fusion

Cloud Data Fusion version 6.1.3 is now available. This version includes performance improvements and minor bug fixes.

  • Improved performance of Joiner plugins, aggregators, program startup, and previews.
  • Added support for custom images. You can select a custom Dataproc image by specifying the image URI.
  • Added support for rendering large schemas (>1000 fields) in the pipelines UI.
  • Added payload compression support to the messaging service.
Cloud Load Balancing

The Organization Policy for restricting load balancer creation has launched into Beta.

Compute Engine

SSD persistent disks on certain machine types now have a maximum write throughput of 1,200 MB/s. To learn more about the requirements to reach these limits, see Block storage performance.

You can now suspend and resume your VM instances. This feature is available in Beta.

Config Connector

Add support for allowing fields not specified by the user to be externally managed (that is, changeable outside of Config Connector). This feature can be enabled for a resource by enabling Kubernetes server-side apply for it, which will be the default for all Kubernetes resources starting in Kubernetes 1.18. More detailed documentation about the feature is coming soon.

Operator improvement: add support for cluster-mode setups, which lets users use one Google Service Account for all namespaces in their cluster. This is very similar to the traditional "Workload Identity" installation setup.

Fix ContainerCluster validation issue (Issue #242).

Fix OOM issue for the cnrm-resource-stats-recorder pod (Issue #239).

Add support for projectViewer prefix for members in IAMPolicy and IAMPolicyMember (Issue #234).

Reduce spec.revisionHistoryLimit for the cnrm-stats-recorder and cnrm-webhook-manager Deployments from 10 (the default) to 1.

July 15, 2020

AutoML Vision Image Classification (ICN)

TFLite Edge model update

TFLite edge models are now enhanced with metadata. Models trained in the next 6 months will remain backward compatible, because separate metadata and label files are included. TFLite models trained after this time might not be backward compatible.

For more information see:

BigQuery ML

Data split and validation options are now available for AutoML Table model training.

Cloud Data Loss Prevention

Added infoType detector:

  • ISRAEL_IDENTITY_CARD_NUMBER
Cloud Functions

Cloud Functions has added support for a new runtime, Node 12, in Beta.

Cloud Functions has added support for a new runtime, Python 3.8, in Beta.

Cloud Spanner

You can now run SQL queries to retrieve read statistics for your database over recent one-minute, 10-minute, and one-hour time periods.

July 14, 2020

AI Platform Prediction

VPC Service Controls now supports AI Platform Prediction. Learn how to use a service perimeter to protect online prediction. This functionality is in beta.

Artifact Registry

You can now use Customer-Managed Encryption Keys (CMEK) to protect repository data in Artifact Registry. For more information, see Enabling customer-managed encryption keys.

Cloud Key Management Service

Cloud HSM resources are available in the us-west4 and asia-southeast2 regions. Cloud KMS resources were already available in these regions.

For information about which Cloud Locations are supported by Cloud KMS, Cloud HSM, and Cloud EKM, see the Cloud KMS regional locations.

VPC Service Controls

Beta stage support for the following integration: