README - SPILLBOX CLOUD PRODUCT (SPL-CL) Spillbox is a hybrid Cloud product that bursts your on-premise compute job on a Linux/MacOS platform (including ARM processor) in the Cloud. Just prepending "splrun run" to your existing on-premise command line, it will use compute from Cloud using your local data. "splrun run ..." is run on on-premise and data is automatically managed by Spillbox tools. // runs and executes locally splrun run // command runs locally but executed in the Cloud Spillbox can also burst to physical remote datacenter using SPL-DC product. Please contact Spillbox team at support@spillbox.io (1-833-SPILLBX) for more details on SPL-DC product. Above Cloud bursting is achieved in a few simple steps ( alternately you can download a demo curl -o run_example.sh https://s3.us-west-1.amazonaws.com/spillbox.io-public/demo/run_example.sh and run all of below commands/steps and more through a single script): 1. Download the executable and untar it. curl -o splfs.tar.gz https://s3-us-west-1.amazonaws.com/spillbox.io-public-linux-amd64/v1.3/splfs.tar.gz // curl -o splfs.tar.gz https://s3-us-west-1.amazonaws.com/spillbox.io-public-darwin-amd64/v1.3/splfs.tar.gz for MacOS // curl -o splfs.tar.gz https://s3-us-west-1.amazonaws.com/spillbox.io-public-linux-arm64/v1.3/splfs.tar.gz for ARM tar xvfz splfs.tar.gz The above steps should create an executable for splrun, along with a few more executables. 2. Perform a one time setup to create an account. ./splrun login Please follow the instructions in the output of the above command. Output will be there only if authentication is needed for the first time, else it would succeed quietly with no action on your part. If authentication is needed, please give an existing email address as a login and your existing auth0.com account password. For the first time user, please use the sign-up column to self signup by providing an email address and a strong password. If sign-up goes through successfully, you will get an email verification to your email address. Please check your email and verify yourself before proceeding further. In case of any problem, please contact the Spillbox support team (support@spillbox.io). If account creation fails due to access to auth0.com or spillbox.io domains, then it is possible that your IT maintains whitelisted domains and these two domains are not in the list. Please make a request to your IT to add "auth0.com" and "spillbox.io" to the whitelisted domains. You can also use a browser to authenticate. To use the browser please set export SPILLBOX_BROWSER=firefox/google-chrome/default/none/... // to one of the above values. 3. If needed, create a config file and customize your setup/requirements. If you are using Spillbox for a demo purpose, you do not need to use any config and tool will create a linux based setup with minimum resources. In this flow, it will use Spillbox credentials and you do not need to create Cloud account ( in step 4 below ) just for using Spillbox for a demo purpose. If you need to customize your setup, the following config file creates one compute node in the Cloud. cat > ./myconfig_file < below with your ARM_TENANT_ID value and run below in your browser: https://login.microsoftonline.com//oauth2/authorize?client_id=e5539ec3-de44-45d1-9ece-3f81f0fb079e&response_type=code&redirect_uri=https%3A%2F%2Fwww.microsoft.com%2F Above enables you to use Spillbox Images. 2. Now go to your subscription and give "Spillbox-Share-Image" as contributor role Steps are in document at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/share-images-across-tenants . 5. Create a project. This should be done only once per project and on a host or VM having at least 4 cores and 32Gb RAM. The concept of the project will be described in later steps. The lines below create a project, myproject, using a config file, myconfig_file, created in step 3. export SPILLBOX_SERVER=aws.spillbox.io # google.spillbox.io for GCP # azure.spillbox.io for Azure ./splrun project apply myproject # without any config file ./splrun project apply myproject ./myconfig_file # with config file Above, the environment variable SPILLBOX_SERVER points to the Spillbox web server aws.spillbox.io. 6. You are now ready to run your existing executable/script at a top/leaf level in the cloud. Syntax: ./splrun run Below runs the script /depot/openssh-7.9p1/build in the cloud as a slurm job in real time. export SPILLBOX_PROJECT=myproject ./splrun run srun /depot/openssh-7.9p1/build Since you may have multiple projects in the Cloud, you need to set SPILLBOX_PROJECT to a project. Here SPILLBOX_PROJECT is set to myproject so that script /depot/openssh-7.9p1/build is run in project myproject. 7. Once you are done with your work, please do not forget to destroy the project else Cloud resources will remain occupied leading to high bills. To destroy the project, please run level in the cloud. ./splrun project destroy_all myproject COMMON QUESTIONS AND THEIR ANSWERS Q. How can I get help for the tool? Ans. At any level, you can type help to get a help message. For example: splrun help splrun run help Q. What is the format and possible entries in the config file? Ans. The format of the config file is "key" = "value", both in quotes. If the value is in the form of a list, then the list syntax is ["value1", "value2"] . Possible keys and their default values are: "worker_count" = 1 Specifies the number of compute machines. "master_count" = 1 Specifies the number of master worker compute machines for distributed workload. "region" = "us-west-1" for aws and "us-west1" for GCP Designates the Cloud region to deploy into. "zone" = us-west1-a for GCP Designates the Cloud zone to deploy into. "ami_name" = "ubuntu-1804-bionic-v20190320" on GCP and similar on AWS Please provide an AMI name if you wish to use your own ami. If your own ami is encrypted then provide kms_key_id. For AWS provide the KMS key id in the following format: "kms_key_id" = "arn:aws:kms:::key/" On Google by default, disks are encrypted by Google-managed encryption. To use your cloud KMS key, provide the KMS key in the following format: "kms_key_id" = "projects//locations//keyRings//cryptoKeys/" On Azure by default, disks are encrypted by Platform-managed key. If you want to use your own customer-managed key, provide the disk encryption ID in the following format: "kms_key_id" = "/subscriptions//resourceGroups//providers/Microsoft.Compute/diskEncryptionSets/" On Azure, you can give the ID of the AMI or marketplace ID (:::) "ami_plan" = "::" For azure only, provide the purchase plan details of the AMI "splfs_ami_name" = "ubuntu-1804-bionic-v20190320" You can provide the splfs_ami_name variable only when "job_manager" = "" . This will create a SPLFS machine using the specified custom AMI. "install_packages" = "" Please provide a list of OS packages if your application needs extra OS packages. "boot_script" = "" Run a user script just after the machine boots. "update_packages" = false Please set it to true if you want to enable upgrade of existing OS packages. "model" = "small" Other values are "medium" and "large". "medium" and "large" allocate powerful instances and are costly. Please consult the Spillbox support team when you use these values. "mount_paths" = // default is derived automatically by Spillbox This gives the list of NFS export paths on prem to be made available in the Cloud. Spillbox automatically adds by default the list of top level directories that are not part of Linux OS. User home directories are not added if they are inside a Linux OS distribution path like /home. If the Spillbox automatic NFS mount point finder is not suitable to you, please override it by providing the list. You can add a full path of subdirectories also. For example ["/depot", "/home/user"]. "exclude_mount_paths" = [] // no default exclude Provide the list of mount_paths to be excluded. "zone" = "us-west1-a" for GCP Designates the Cloud zone to deploy into. "master_iscompute" = "false" slurm controller VM is also a compute node. "splpipe_instance_type" = "" Provides an instance resource for spillbox transport server (also used as NAT instance for private subnet) "splfs_instance_type" = "" Provides an instance resource for spillbox nfs server "instance_type" = // t2.micro/xlarge/2xlarge on AWS, // g1-small/n1-standard-4/8 on GCP Provides an instance resource for master/slave slurm worker. "master_instance_type" = // t2.micro/xlarge/2xlarge on AWS, Provides an instance resource for slurm master worker for distributed workload. "slave_instance_type" = // t2.micro/xlarge/2xlarge on AWS, Provides an instance resource for slurm slave worker for distributed workload. "cloud_user" = // default is on-prem unix username "cloud_group" = // default is on-prem unix group "cloud_uid" = // default is on-prem unix uid "cloud_gid" = // default is on-prem unix gid "cloud_shell" = // default is /bin/bash "job_manager" = // default is "slurm" . On AWS, we support "slurm", "pcluster" and "ssh". "pcluster is Parallel cluster on AWS. it's default scheduler is "sge" and other values can be provided via "pcluster_scheduler". On GCP, we support "slurm", "elastic_slurm" and "ssh". In "elastic_slurm" and "pcluster", compute machines are allocated and deleted automatically as per demand. There is an upper cap on the maximum number of machines allocated at any time, determined by "worker_count". "slurm_partitions" = This is applicable to elastic_slurm only. "slurm_partitions" is an array of configurations for specifying multiple machine types residing in their own partitions. type = list(object({ name = string, machine_type = string, max_node_count = number, zone = string, compute_disk_type = string, compute_disk_size_gb = number, compute_labels = any, cpu_platform = string, gpu_type = string, gpu_count = number, network_storage = list(object({ server_ip = string, remote_mount = string, local_mount = string, fs_type = string, mount_options = string})), preemptible_bursting = bool, vpc_subnet = string, static_worker_count = number })) Example value on GCP is: slurm_partitions = [ { name = "debug" machine_type = "n1-standard-2" static_node_count = 0 max_node_count = 10 zone = "us-west1-a" compute_disk_type = "pd-standard" compute_disk_size_gb = 20 compute_labels = {} cpu_platform = null gpu_count = 0 gpu_type = null network_storage = [] preemptible_bursting = true vpc_subnet = null }, ] "exclude_filter_path" = "" // default no file/directories are excluded Excludes file/directory from getting uploaded to the Cloud. "wan_timeout" = 900 // Change WAN timeout "data_in_memory" = true // default true Spillbox can keep data on physical disk or in RAM. Data will be in memory if there is enough space in memory and data_in_memory is true. "vpc_id" = "" // bring your own vpc id "public_subnet_id" = "" // bring your own public subnet "public_security_group_id" = "" If you bring your own public subnet, provide the security group which should allow HTTPS connection with your on-prem machine and connections with the private subnet machine Ports need to opened : - Allow 443 Port for your on-prem machine "private_subnet_id" = "" // bring your own private subnet "private_security_group_id" = "" If you bring your own private subnet, provide the security group which should allow connection within the private subnet "public_subnet_cidr" = "10.0.1.0/24" // If you bring your own VPC, then provide public subnet IP address prefixes. "private_subnet_cidr" = "10.0.16.0/20" // If you bring your own VPC, then provide private subnet IP address prefixes. "resource_group" = "" // bring your own resource_group on Azure "transport_port" = "443" // https port allowed/to be used on public subnet "cloud_ssh_port" = "22" // ssh port allowed/to be used on public subnet "hybrid_mount_paths" = "" // list of mount paths separated by spaces Enable fully synchronized data sync between on-premise and Cloud for set of mount paths hierarchically. This is equivalent to enabling -download-hier, missing-hier and sync-hier in the snapshot option for these directories. Environment variable SPILLBOX_HYBRID_MOUNT_PATHS also does the same. Special keyword "ALL" enables globally on all mount paths. "hybrid_mount_paths_exclude" = "" // list of mount paths separated by spaces Diasble fully synchronized data sync between on-premise and Cloud for set of mount paths hierarchically. This will be applied after hybrid_mount_paths Environment variable SPILLBOX_HYBRID_MOUNT_PATHS_EXCLUDE also does the same. Special keyword "PATH" excludes all paths in $PATH. "spillbox_block_outbound" = "1" // default is 0. Setting this to 1 will block all outbound rules for your project. Q. What are other important commands to know? Ans. Other important commands are: Syntax: splrun project destroy|destroy_all|snapshot|backup|restore destroy -- To destroy compute machine destroy_all -- To destroy compute machine and file cache snapshot -- To create snapshot at current time backup -- To backup cloud data restore -- To restore cloud Once you are done running all your jobs, please run destroy_all to delete all machines and data in the Cloud. If your data upload is large and you want to come back again to use the Cloud in the near future, please run destroy only. This keeps the file server running and the cache intact. This does, however, cost money. If you plan to come back to reuse the Cloud after a long time, please run a backup followed by destroy_all. Backup will archive the file system cache in objectstore which is much cheaper. When you come back, you will have to run restore to retrieve the data from the Cloud. If you make changes on premise after creating a project and accessing data on the Cloud, these changes are not updated in the Cloud. To make your changes visible in the Cloud, you have to run a snapshot. snapshot creates a file instance in the Cloud based on time of the snapshot command. Each time you create changes on prem, you need to update the file system instance time in the Cloud by running the snapshot command. Syntax: splrun show [] In the case that you need to login to a Cloud compute machine, you will need to know information such as the ssh key and the IP address of the machine. These and other important variables can be output with their values in the Cloud using the show command. Syntax: splrun precache ... splrun download splrun download_all [] The requirements of your job may be such that it will sync TBs of data to the Cloud before the job starts running. Although Spillbox will do this automatically for you, you may not want to waste compute resources while data is being uploaded automatically by Spillbox. You may want to help the system by uploading some of the larger data yourself before running the job. splrun upload is a very useful tool for this. You can sync multiple directories at a time. Spillbox does not sync data from the Cloud to the premise automatically, as data download from the Cloud is costly. You will need to use the above download command to sync the selected result back to the premise. Upload/download can exclude/include selected files/directories and detailed usage can be available with command: splrun precache/download help There are few important features for data synchronization between on-premise and remote datacenter which can be activated through snapshot features: splrun project snapshot help| [-missing|-missing-hier|-missing-remove|-missing-hier-remove] [-missing-pattern-include ] [-missing-pattern-exclude ] [-sync|-sync-hier|-sync-remove|-sync-hier-remove] [-download|-download-hier|-download-remove|-download-hier-remove] [-download-pattern-include ] [-download-pattern-exclude ] [-reset-log ] [-reset-appendlog ] [/] / -- Optionally snapshot a file/dir only -missing -- Search for missing files/dirs at current level in on-prem -missing-hier -- Search for missing files/dirs at current and below level in on-prem -missing-remove -- Remove search for missing files/dirs at current level in on-prem -missing-hier-remove -- Remove search for missing files/dirs at current and below level in on-prem -missing-pattern-include -- Pattern to include in handling missing files/dirs -missing-pattern-exclude -- Pattern to include in handling missing files/dirs -sync -- Sync files/dirs at current level with on-prem all the time -sync-hier -- Sync files/dirs at current level and below with on-prem all the time -sync-remove -- Remove -sync -sync-hier-remove -- Remove -sync-hier -download -- Download new files/dirs at current level automatically at close -download-hier -- Download new files/dirs at current and below level automatically at close -download-remove -- Remove download new files/dirs at current level automatically at close -download-hier-remove -- Remove download new files/dirs at current and below level automatically at close -download-pattern-include -- Pattern to include in handling download of files/dirs -download-pattern-exclude -- Pattern to exclude in handling download of files/dirs -reset-log -- Create full snapshot and recreate log file for tracking data transfer -reset-appendlog -- Create full snapshot and recreate log file in append mode for tracking data transfer With the activation of -missing*, -sync* and -download* features in the beginning of the project, Spillbox's cloud bursting can be made to look like two way fully synchronized hybrid cloud. Q. How can I get the license server working? Ans. Spillbox can pick up licenses from the premises. During the "splrun project apply" command, please set SPILLBOX_LICENSE_FILE to the list of port@host separated by ":". This list should be license servers on the premise which you want Cloud compute machines to use. Currently all ports need to be unique. This limitation will be removed later on. If you need to change SPILLBOX_LICENSE_FILE, you need to run "splrun project destroy..". Value of LM_LICENSE_FILE in the Cloud should be the same as on the premise. License will use an outgoing port, default 8888. This port may be blocked by corporations. If so, try another port by specifying the value to "tunnel_port" in the config file. To debug license problem, please set the environment SPILLBOX_DEBUG_TUNNEL to 1 at project apply time to enable logging in file spillbox_license.log.$SPILLBOX_PROJECT. Q. How can I manage jobs in the Cloud? Ans. We plan to support slurm, univa and other job managers. Currently we support only slurm. "splrun show" shows the basic configuration. Any slurm command can be run from premise as "splrun run ". For example, splrun run sinfo. Customers can bring their own machine orchestration and job manager. Q. How can I ssh out to Spilbox worker machines when the ssh port is blocked at corporate level? Ans. You can use Spillbox data pipe to ssh out. After project is created, you need to run the following command to ssh: splrun ssh where is the spillbox machines you want to ssh to. No need to provide ssh private key. It is automatically taken care by splrun. Implementation internally uses a local port, 2222 and above. You can control the port with the environment variable SPILLBOX_SSH_PORT by giving the starting port number. Please use a private IP address, not public, for worker_hostname. Q. How can I pass environment variable to "splrun run" Ans. Please see below usage: splrun run [-env =] help| Q. How can I use Spillbox for test case archival and replication at a later time or at a third party? Ans. If user runs into complex bug and need to send the test case to the third party for debugging by replicating the test case at third party then please follow the below steps: splrun run -archive // archive in dir1 at the end of running Inspect dir1 for sensitive data as suggested by splrun at runtime and ship it to the third party. Third party can restore using Spillbox during project apply phase: splrun project apply -load  // dir2 is directory where shipped // data were untar'ed by the vendor. Then run as splrun run -rerun // if no arguments are given, it runs the command used during archiving Q. How can I get help from Spillbox team? Ans: Please email support@spillbox.io. Q. During project "apply", I am getting "read: connection reset by peer" error. Ans: Your machine, VM or host machine, is killing the connection. Please turn off firewall on host or/and VM on linux: sudo systemctl disable firewalld on Mac: https://www.wikihow.com/Turn-Off-Mac-Firewall Q. "splrun ssh ..." is intermittently failing Ans: Try setting environment variable SPILLBOX_SSH_SERVER_WAIT to a number higher than 5. Q. How do I create a custom tunnel to a server running on-premise? Ans: Please set SPILLBOX_SERVER_TUNNEL before "project apply ...". Example: export SPILLBOX_SERVER_TUNNEL=port1@host1:port2@host2:port3@host3 Q. How do I get UI working in the Cloud? Ans: Start VNC server on any of the worker machines. Please note the machine and vnc number. and on-premise run "splrun project vnctunnel help" and follow the instructions. Example: splrun ssh 10.0.28.37 vncpasswd splrun ssh 10.0.28.37 vncserver splrun project vnctunnel -vnc tigervnc -number 1 10.0.28.37 // for tiger vncserver 10.0.28.37:1 splrun project vnctunnel -vnc rdp -number 0 10.0.28.37 // similar for RDP server at 10.0.28.37 Q. How do I create a custom tunnel to a server running in the Cloud? Ans: Please see "splrun project vnctunnel help". Please provide the value of -vnc "custom" and vncnumber as port number. Example: splrun project vnctunnel -vnc custom -number Q. How to fix "googleapi: Error 400: Use of this bucket name is restricted" Error? Ans: For any bucket name error, please try changing the value to SPILLBOX_PROJECT env variable. Q. How can I change the location of Spillbox config file $HOME/.config/spillbox ? Ans: Please set environment variable SPILLBOX_CONFIGDIR to a location other than $HOME. Q. How can I run multiple splrun in single command line? Ans: Use multi_splrun splrun run ++ splrun run . Using "++" user can concatenate any number of splrun commands to the need. To run multiple splrun commands in background use only brun multi_splrun splrun brun ++ splrun brun & Example : multi_splrun splrun run hostname ++ splrun run whoami multi_splrun splrun brun hostname ++ splrun brun whoami & Q. How can I resolve a connection issue due to a change in your machine's public IP ? Ans: Run the following command to reapply the project configuration splrun project reapply Q. How to Automatically Destroy an Idle Project ? Ans: Set SPILLBOX_IDLE_TIMEOUT to the desired number of hours before creating a project . It specifies how long a project can remain idle before being automatically destroyed. Default value will be 24 hours. Q. How to block all outbount rule of Project ? Ans: Set SPILLBOX_BLOCK_OUTBOUND before project apply or add it to your config file to block all the outbound rules of your project. Example: export SPILLBOX_BLOCK_OUTBOUND=1 "spillbox_block_outbound"="1" // for config file Q. How can I get help from Spillbox team? Ans: Please email support@spillbox.io or call 1-833-SPILLBX(774-5529). ------------------------------------------ Required IAM Roles for Cloud Providers ------------------------------------------ 1. AWS (Amazon Web Services) - IAM roles required on AWS > AmazonEC2FullAccess: Provides full access to Amazon EC2. > AmazonS3FullAccess: Provides full access to Amazon S3." > Specific IAM Roles: To create Instance Profiles and policy. - Specific IAM roles (JSON format): { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "ec2:CreateVpc", "ec2:DeleteVpc", "ec2:DescribeVpcs", "ec2:ModifyVpcAttribute", "ec2:DescribeVpcAttribute", "ec2:CreateSubnet", "ec2:DeleteSubnet", "ec2:DescribeSubnets", "ec2:ModifySubnetAttribute", "ec2:GetSubnetCidrReservations", "ec2:AttachInternetGateway", "ec2:DescribeInternetGateways", "ec2:CreateInternetGateway", "ec2:DeleteInternetGateway", "ec2:DetachInternetGateway", "ec2:CreateLocalGatewayRouteTableVpcAssociation", "ec2:CreateRoute", "ec2:DeleteRoute", "ec2:CreateRouteTable", "ec2:DeleteRouteTable", "ec2:DescribeRouteTables", "ec2:AssociateRouteTable", "ec2:DisassociateRouteTable", "ec2:CreateSecurityGroup", "ec2:DeleteSecurityGroup", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:DescribeSecurityGroupRules", "ec2:ModifySecurityGroupRules", "ec2:DescribeSecurityGroups", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", "ec2:CreateNetworkAcl", "ec2:DeleteNetworkAcl", "ec2:ReplaceNetworkAclAssociation", "ec2:DescribeNetworkAcls", "ec2:CreateNetworkAclEntry", "ec2:DeleteNetworkAclEntry", "ec2:StartInstances", "ec2:RunInstances", "ec2:StopInstances", "ec2:DescribeInstances", "ec2:TerminateInstances", "ec2:DeleteKeyPair", "ec2:CreateKeyPair", "ec2:ImportKeyPair", "ec2:DescribeKeyPairs", "ec2:DescribeImages", "ec2:CreateTags", "ec2:DeleteTags", "ec2:DescribeTags", "ec2:DescribeNetworkInterfaces", "ec2:AttachNetworkInterface", "ec2:CreateNetworkInterface", "ec2:DeleteNetworkInterface", "ec2:DetachNetworkInterface", "ec2:ModifyNetworkInterfaceAttribute", "ec2:DescribeInstanceTypes", "ec2:DescribeInstanceStatus", "ec2:DescribeInstanceAttribute", "ec2:ModifyInstanceAttribute", "ec2:ResetInstanceAttribute", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:AttachVolume", "ec2:CreateVolume", "ec2:DeleteVolume", "ec2:DetachVolume", "ec2:ModifyVolume", "ec2:ModifyVolumeAttribute", "ec2:DescribeVolumeStatus", "ec2:DescribeVolumesModifications", "ec2:DescribeInstanceCreditSpecifications", "ec2:ModifyInstanceCreditSpecification" ], "Resource": "*" }, { "Sid": "Statement1", "Effect": "Allow", "Action": [ "s3:*" ], "Resource": [ "arn:aws:s3:::ts-*" ] } ] } 2. AZURE (Microsoft Azure) - Roles needed for Application account on Azure > Microsoft.Storage: FullAccess > Microsoft.Network: FullAccess > Microsoft.Compute: FullAccess > Microsoft.Resources/subscriptions/resourceGroups: FullAccess > Microsoft.Authorization: roleAssignment - Specific IAM roles (JSON format): "permissions": [ { "actions": [ "Microsoft.Storage/*", "Microsoft.Network/*", "Microsoft.Compute/*", "Microsoft.Resources/subscriptions/resourceGroups/*", "Microsoft.Authorization/roleAssignments/read", "Microsoft.Authorization/roleAssignments/write", "Microsoft.Authorization/roleAssignments/delete", ], "notActions": [], "dataActions": [], "notDataActions": [] } ] 3. GCP (Google Cloud Platform) - IAM roles and permissions required for service account on GCP compute.disks.create compute.disks.delete compute.disks.get compute.disks.list compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.firewalls.update compute.globalOperations.get compute.images.create compute.images.delete compute.images.get compute.images.list compute.images.update compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setMetadata compute.instances.start compute.instances.stop compute.instances.update compute.networks.create compute.networks.delete compute.networks.get compute.networks.list compute.networks.update compute.networks.updatePolicy compute.projects.get compute.projects.setCommonInstanceMetadata compute.routers.create compute.routers.delete compute.routers.get compute.routers.list compute.routers.update compute.subnetworks.create compute.subnetworks.delete compute.subnetworks.get compute.subnetworks.list compute.subnetworks.update compute.subnetworks.use compute.subnetworks.useExternalIp compute.zones.get compute.zones.list iam.serviceAccounts.actAs storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.buckets.setIamPolicy storage.buckets.update storage.objects.create storage.objects.delete storage.objects.get storage.objects.list storage.objects.update