RestAPI

RestAPI is a data source for pulling data from HTTP endpoints. It can access multiple URLs, and the information returned by a specific URL can be ingested into the warehouse through it.

Sprinkle supports RestAPI as a data source. Clicking the "+" sign shows the list of available data sources; selecting RestAPI lets you name and create a new RestAPI data source.

After naming the data source, the user can either choose an existing connection from the dropdown or create a new connection.

In the Datasets tab, the user needs to provide a table name and a Web URL with parameters, and then select either the GET or the POST request method.

If the user selects the GET request method, they can optionally provide Headers (which must be valid JSON) and a Data Root.

The Data Root is the JSON path from which data should be extracted. For example, given the response {"key1": [{"x": "y"}, {"x": "z"}], "key2": "value2", "offset": 1}, setting the Data Root to key1 stores {"x": "y"} and {"x": "z"} as two separate rows in the warehouse table; without a Data Root, the whole JSON is flattened and stored as a single row. For nested structures, separate the keys with dots: given {"key1": [{"x": "y"}, {"x": "z"}], "key2": {"key2_key1": [{"x": "y"}, {"x": "z"}]}, "offset": 1}, a Data Root of key2.key2_key1 yields {"x": "y"} and {"x": "z"} as two separate rows. A minimal sketch of this behaviour follows below.
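To make the flattening rule concrete, here is a small Python sketch of the behaviour described above. The rows_for_data_root helper is hypothetical, for illustration only; it is not Sprinkle code.

import json

# Hypothetical helper, for illustration only: it mimics how a Data Root
# selects the records that become rows in the warehouse table.
def rows_for_data_root(payload, data_root=None):
    if not data_root:
        return [payload]                      # no Data Root: whole JSON -> one flattened row
    node = payload
    for key in data_root.split("."):          # dot-separated path for nested keys
        node = node[key]
    return node if isinstance(node, list) else [node]

payload = json.loads('{"key1": [{"x": "y"}, {"x": "z"}], "key2": "value2", "offset": 1}')
print(rows_for_data_root(payload, "key1"))    # [{'x': 'y'}, {'x': 'z'}] -> two rows
print(rows_for_data_root(payload))            # whole document -> a single row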

If the user selects the POST request method, they can optionally provide a Body (currently only Raw Data is supported); according to the data, the user chooses a Content-Type value from the dropdown and supplies the Raw Data, along with Headers (which must be valid JSON) and a Data Root.

The Data Root is interpreted the same way as described for the GET request method above. A rough sketch of a typical POST configuration follows below.
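As a rough illustration (not Sprinkle's actual implementation), a POST-method dataset configured with JSON Headers, a Content-Type, and a Raw Data body corresponds to a request like the one below. The endpoint, token, and body are made-up examples.

import requests

# Made-up endpoint and credentials, for illustration only.
url = "https://api.example.com/v1/orders/search"          # the Web URL with parameters
headers = {"Authorization": "Bearer <TOKEN>",             # Headers, supplied as JSON
           "Content-Type": "application/json"}            # chosen Content-Type value
body = '{"status": "shipped", "limit": 100}'              # Raw Data body

response = requests.post(url, data=body, headers=headers)

# With a Data Root such as "results" (or a dotted path), the list under that
# key in response.json() would be stored as one warehouse row per element.
print(response.json())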

In the Ingestion Jobs tab, the concurrency (the number of tables that can run in parallel, up to a maximum of 7) can be set before running the job. The status of the job is updated in the tab below once it completes. Jobs can also be set to run automatically by enabling Autorun; the frequency can be changed via More --> Autorun --> Change Frequency.
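The snippets below show different ways of calling the Sprinkle REST API from Python, R, and SAS: streaming explore and segment results, listing the explores in a space, listing the spaces in an organisation, fetching the SQL behind an explore, and creating or updating a CSV data source. In every snippet, values such as <hostname>, <EXPLORE ID>, API_KEY, API_SECRET, and the file paths are placeholders to be replaced with your own.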

# Python: stream the result of an explore using HTTP basic auth (API key / secret)
import requests
from requests.auth import HTTPBasicAuth

auth = HTTPBasicAuth("<API_KEY>", "<API_SECRET>")
response = requests.get("https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE_ID>", auth=auth)

print(response.content)

# R: download an explore result with httr and read it as CSV
library('httr')

username = '<API KEY>'
password = '<API SECRET>'

temp = GET("https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE ID>",
           authenticate(username, password, type = "basic"))

temp = content(temp, 'text')     # response body as text
temp = textConnection(temp)
temp = read.csv(temp)            # parse the CSV into a data frame

/* SAS: download the data */

filename resp temp;
proc http
   url="https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE ID>"
   method="GET"
   WEBUSERNAME="<API KEY>"
   WEBPASSWORD="<API SECRET>"
   out=resp;
run;

/* Import the data into a CSV dataset */
proc import
   file=resp
   out=csvresp
   dbms=csv;
run;

/* Print the data */
proc print data=csvresp;
run;

# Python: create a new CSV data source by uploading a CSV file (basic auth)
import requests
import json

url = 'http://hostname/api/v0.4/createCSV'

username = 'API_KEY'
password = 'API_SECRET'

files = {'file': open('FILE_PATH.csv', 'rb')}
values = {'projectname': 'PROJECT_NAME', 'name': 'CSV_DATASOURCE_NAME'}

r = requests.post(url, files=files, data=values, auth=(username, password))

res_json = json.loads(r.text)

print(res_json['success'])

# Python: update an existing CSV data source with a new file (basic auth)
import requests
import json

url = 'http://hostname/api/v0.4/updateCSV'

username = 'API_KEY'
password = 'API_SECRET'

files = {'file': open('FILE_PATH.csv', 'rb')}
values = {'projectname': 'PROJECT_NAME', 'name': 'CSV_DATASOURCE_NAME'}

r = requests.post(url, files=files, data=values, auth=(username, password))

res_json = json.loads(r.text)

print(res_json['success'])

# Python: stream an explore result as CSV text (basic auth)
import requests

url = 'https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE ID>'

username = 'API_KEY'
password = 'API_SECRET'

r = requests.get(url, auth=(username, password))
print(r)        # HTTP status
print(r.text)   # CSV payload

# Python: list the explores in a space and load the response into a DataFrame
import requests
import pandas as pd

url = 'https://<hostname>/api/v0.4/explores/infoByFolder/<SPACE_ID>'

username = 'API_KEY'
password = 'API_SECRET'

r = requests.get(url, auth=(username, password)).json()
df = pd.DataFrame(r)
print(df)

# Python: list the spaces (folders) in an organisation, showing their names and ids
import requests
import pandas as pd

url = 'https://<hostname>/api/v0.4/folders/byOrgName/<ORG_NAME>'

username = 'API_KEY'
password = 'API_SECRET'

r = requests.get(url, auth=(username, password)).json()
df = pd.DataFrame(r)
print(df.loc[:, ['name', 'id']])

# Python: fetch the SQL behind an explore (basic auth)
import requests

url = 'https://<host>/api/v0.4/explore/sql/<EXPLORE_ID>/<PROJECT_NAME>'

username = 'API_KEY'
password = 'API_SECRET'

r = requests.get(url, auth=(username, password))
print(r.text)

# Python: stream an explore result, authenticating with the SprinkleUserKeys header
import requests
import pandas as pd
import io

url = 'https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE ID>'

secret = 'API_SECRET'

r = requests.get(url, headers={'Authorization': 'SprinkleUserKeys ' + secret})

df = pd.read_csv(io.StringIO(r.text), sep=',')   # parse the CSV payload into a DataFrame

# Python: stream a segment result, authenticating with the SprinkleUserKeys header
import requests
import pandas as pd
import io

url = 'https://<hostname>/api/v0.4/segment/streamresult/<SEGMENT ID>'

secret = 'API_SECRET'

r = requests.get(url, headers={'Authorization': 'SprinkleUserKeys ' + secret})

df = pd.read_csv(io.StringIO(r.text), sep=',')   # parse the CSV payload into a DataFrame

# Python: create a CSV data source, authenticating with the SprinkleUserKeys header
import requests
import json

url = 'http://hostname/api/v0.4/createCSV'

files = {'file': open('path/file.csv', 'rb')}
values = {'projectname': 'PROJECT_NAME', 'name': 'csv_datasource_name/table_name'}

secret = 'API_SECRET'

r = requests.post(url, files=files, data=values, headers={'Authorization': 'SprinkleUserKeys ' + secret})

res_json = json.loads(r.text)

# Python: update a CSV data source, authenticating with the SprinkleUserKeys header
import requests
import json

url = 'http://hostname/api/v0.4/updateCSV'

files = {'file': open('path/file.csv', 'rb')}
values = {'projectname': 'PROJECT_NAME', 'name': 'csv_datasource_name/table_name'}

secret = 'API_SECRET'

r = requests.post(url, files=files, data=values, headers={'Authorization': 'SprinkleUserKeys ' + secret})

res_json = json.loads(r.text)

# Python: fetch the SQL behind an explore, authenticating with the SprinkleUserKeys header
import requests
import pandas as pd
import io

url = 'https://<host>/api/v0.4/explore/sql/<EXPLORE_ID>/<PROJECT_NAME>'

secret = 'API_SECRET'

r = requests.get(url, headers={'Authorization': 'SprinkleUserKeys ' + secret})

df = pd.read_csv(io.StringIO(r.text), sep=',')   # load the response body into a DataFrame