v.0.15

2023.02.07

This version has been deprecated. If used with the current API version, it can produce unexpected behaviour or errors.

The package shimoku-api-python is no longer maintained.

To get the new version 🚀

pip install --upgrade shimoku-api-python

Introducing verbosity and asynchronous execution: this latest version has been designed to enhance the developer experience by providing greater visibility into what is happening and by reducing dashboard execution time by a factor of 10!

Fixes

  • An issue with method visibility in the modules has been solved: when your IDE of choice inspects each module of the SDK, it will now only show the methods available to you.

  • Pagination has been implemented for low-level queries, which means there is no longer a limit on the number of apps or reports that can be retrieved at one time using the SDK.

Improvements

  • The tabs update function has more options! Now tabs can stick to the top of the screen when scrolling by setting the parameter sticky to True. They also have two variants, one where they are enclosed:

    and one where they are separated:

    By default they will be shown in the former style. To activate the second visualization, the parameter just_labels has to be set to True.

    The following code exemplifies the use of the new tabs features:

    import random

    # Create ten gauge indicators, each one in its own tab of the
    # 'Gauge Indicators' tabs group
    list_of_tabs = []
    for i in range(10):
        s.plt.gauge_indicator(
            menu_path='Lorem ispum',
            order=0,
            value=random.randint(0, 100),
            title='Lorem ispum',
            description='Lorem ispum',
            tabs_index=('Gauge Indicators', f'Lorem ispum {i}'),
        )
        list_of_tabs.append(f'Lorem ispum {i}')

    # Set the order in which the tabs appear inside the group
    s.plt.change_tabs_group_internal_order(
        menu_path='Lorem ispum',
        group_name='Gauge Indicators',
        tabs_list=list_of_tabs,
    )
    
    # Build a second tabs group, 'Gauge Indicators Head'
    list_of_tabs = []
    for i in range(10):
        # Add extra charts to the first tab of the 'Gauge Indicators' group
        if i == 0:
            for j in range(10):
                s.plt.gauge_indicator(
                    menu_path='Lorem ispum',
                    order=j*2+2,
                    value=random.randint(0, 100),
                    title='Lorem ispum',
                    description='Lorem ispum',
                    tabs_index=('Gauge Indicators', f'Lorem ispum {i}'),
                )
    
        s.plt.gauge_indicator(
            menu_path='Lorem ispum',
            order=0,
            value=random.randint(0, 100),
            title='Lorem ispum',
            description='Lorem ispum',
            tabs_index=('Gauge Indicators Head', f'Lorem ispum {i}'),
        )
        list_of_tabs.append(f'Lorem ispum {i}')
    
    # Order the head group's tabs, then nest the 'Gauge Indicators' group
    # inside its first tab
    s.plt.change_tabs_group_internal_order(
        menu_path='Lorem ispum',
        group_name='Gauge Indicators Head',
        tabs_list=list_of_tabs,
    )
    s.plt.insert_tabs_group_in_tab(
        menu_path='Lorem ispum',
        parent_tab_index=('Gauge Indicators Head', 'Lorem ispum 0'),
        child_tabs_group='Gauge Indicators',
    )
    # The nested group uses the separated ('just labels') variant and is not sticky
    s.plt.update_tabs_group_metadata(
        menu_path='Lorem ispum',
        group_name='Gauge Indicators',
        cols_size=6,
        sticky=False,
        just_labels=True,
        rows_size=14,
    )
    
    # The head group sticks to the top of the screen when scrolling
    s.plt.update_tabs_group_metadata(
        menu_path='Lorem ispum',
        group_name='Gauge Indicators Head',
        sticky=True,
        just_labels=False,
        rows_size=0,
    )

    The result is:

  • We have made the following charts more capable:

    bar, scatter_with_confidence_area, horizontal_barchart, zero_centered_barchart,
    line, scatter, heatmap

    These charts will now aggregate duplicate data when it is present, using a custom function passed as an argument in which the user defines how the aggregation should happen.

    The parameter's name is aggregation_func, and it expects one of the following:

    • A single aggregation function (e.g. np.mean, np.sum)

    • A list of aggregation functions

    • A dictionary mapping a column name to an aggregation function or a list of aggregation functions.

    If only one function is defined, it will be applied to all columns. If a list is provided, all functions will be applied to all columns. If a dictionary is passed, each column will be aggregated by the function or functions defined for it.

    Using this data:

    import datetime as dt

    # Restaurant ratings with duplicated dates across locations
    data = [
        {'date': dt.date(2021, 1, 1), 'Restaurant rating': 1, 'food rating': 10,
         'Location': "Barcelona", 'Fav Food': "pizza", 'Fav Drink': "water"},
        {'date': dt.date(2021, 1, 2), 'Restaurant rating': 2, 'food rating': 8,
         'Location': "Barcelona", 'Fav Food': "sushi", 'Fav Drink': "fanta"},
        {'date': dt.date(2021, 1, 3), 'Restaurant rating': 3, 'food rating': 10,
         'Location': "Madrid", 'Fav Food': "pasta", 'Fav Drink': "wine"},
        {'date': dt.date(2021, 1, 4), 'Restaurant rating': 4, 'food rating': 5,
         'Location': "Madrid", 'Fav Food': "pizza", 'Fav Drink': "wine"},
        {'date': dt.date(2021, 1, 5), 'Restaurant rating': 5, 'food rating': 7,
         'Location': "Madrid", 'Fav Food': "sushi", 'Fav Drink': "water"},
    
        {'date': dt.date(2021, 1, 1), 'Restaurant rating': 5, 'food rating': 6,
         'Location': "Andorra", 'Fav Food': "pizza", 'Fav Drink': "water"},
        {'date': dt.date(2021, 1, 2), 'Restaurant rating': 4, 'food rating': 0,
         'Location': "Paris", 'Fav Food': "sushi", 'Fav Drink': "fanta"},
        {'date': dt.date(2021, 1, 3), 'Restaurant rating': 3, 'food rating': 5,
         'Location': "Paris", 'Fav Food': "pasta", 'Fav Drink': "wine"},
        {'date': dt.date(2021, 1, 4), 'Restaurant rating': 2, 'food rating': 9,
         'Location': "Andorra", 'Fav Food': "pizza", 'Fav Drink': "wine"},
        {'date': dt.date(2021, 1, 5), 'Restaurant rating': 1, 'food rating': 8,
         'Location': "Andorra", 'Fav Food': "sushi", 'Fav Drink': "water"},
    ]

    If you pass aggregation_func=[np.mean, np.sum, np.amax, np.count_nonzero, np.amin] in the following code:

    import numpy as np

    s.plt.bar(
        data=data,
        x='date', y=['Restaurant rating', 'food rating'],
        menu_path=menu_path, order=0,
        aggregation_func=[np.mean, np.sum, np.amax, np.count_nonzero, np.amin]
    )

    The result is:

    If instead we execute with a dictionary:

    s.plt.bar(
        data=data,
        x='date', y=['Restaurant rating', 'food rating'],
        menu_path=menu_path, order=0,
        aggregation_func={'food rating': [np.count_nonzero, np.amin],
                          'Restaurant rating': [np.mean, np.sum, np.amax]},
    )
    
    s.plt.bar(
        data=data,
        x='date', y=['Restaurant rating', 'food rating'],
        menu_path=menu_path, order=1,
        aggregation_func={'food rating': [np.count_nonzero, np.amin]}
    )

    The resulting plots are:

    If duplicate data is found, the SDK will notify you, and you will need to provide an aggregation function, as there is no default.
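
    For the simplest case described above, a single function applies to every column in y. A minimal sketch reusing the data and parameters already shown (the order value is an arbitrary choice):

    s.plt.bar(
        data=data,
        x='date', y=['Restaurant rating', 'food rating'],
        menu_path=menu_path, order=2,
        # One function, applied to both columns
        aggregation_func=np.mean,
    )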

  • The charts with the option to add a filter have now been upgraded so that they can show the accumulation of all filters! This is useful to contrast whether one filtering field has a bigger effect than another, or if you simply want to visualize the whole data set. To trigger it, you just need to add 'get_all' to the filters dictionary.

    In the next example we can see that aggregation_func will be used for the repeated data after grouping by a filtering field:

    filters = {'order': 0,
               'filter_cols': ["Location", "Fav Food", 'Fav Drink'],
               'get_all': True,
               }
    
    s.plt.bar(
        data=data,
        x='date', y=['Restaurant rating', 'food rating'],
        menu_path='Multifilter all options',
        order=1,
        filters=filters,
        aggregation_func=np.mean,
    )

    The result from executing the example is:

    If the value of 'get_all' is a list of the column names, instead of True, then only the specified columns will have the option of aggregating all values, as shown in the following example:

    filters = {'order': 4,
               'filter_cols': ["Location", "Fav Food", 'Fav Drink'],
               'get_all': ["Location", "Fav Drink"],
               }
               
    s.plt.bar(
        data=data,
        x='date', y=['Restaurant rating', 'food rating'],
        menu_path=menu_path,
        order=3,
        rows_size=2, cols_size=9,
        filters=filters,
        aggregation_func={"food rating": [np.sum, np.mean],
                          "Restaurant rating": [np.mean, np.amax, np.amin]}
    )

    The example's result is:

  • There is now the option to monitor the SDK's flow of execution, with three levels of verbosity. This helps you see where an error occurred, which makes bug fixing a lot easier. It also outputs how long each function call has taken, so you can quickly profile your code. To enable it, just set the client's verbosity parameter to 'INFO' or 'DEBUG'.

    s = Shimoku.Client(
        access_token=access_token,
        universe_id=universe_id,
        environment=environment,
        business_id=business_id,
        verbosity='INFO',
    )

    The 'INFO' level will be the most useful for visualizing the execution, while 'DEBUG' is meant to output as much information as possible. You can also set it to 'WARNING', but this is the default behaviour and will have no effect: only warnings and errors will be output.

    The logging level of the Shimoku SDK can be configured dynamically during execution by calling the configure_logging function with the desired verbosity level (either 'DEBUG', 'INFO', or 'WARNING') and an optional channel to write the log output to. This allows for fine-grained control over the logging behavior and output, making it easier to debug and profile the SDK's execution.
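
    As a rough sketch of how this might look (the exact signature of configure_logging is not shown here, so the keyword names and the fact that it hangs off the client object are assumptions), the level could be raised mid-script and the output directed to a specific channel:

    import sys

    # Assumed usage: the 'verbosity' and 'channel' keyword names are illustrative
    s.configure_logging(verbosity='DEBUG', channel=sys.stdout)

    ...  # calls you want to inspect in more detail

    s.configure_logging(verbosity='WARNING')  # back to the quiet default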

  • This version of the SDK comes with a major productivity boost! Asynchronous execution is now supported, which means that code execution doesn't need to stop for requests, freeing up time to make more requests. To enable it, simply set the async_execution parameter to True when creating the client object:

    s = Shimoku.Client(
        access_token=access_token,
        universe_id=universe_id,
        environment=environment,
        business_id=business_id,
        verbosity='INFO',
        async_execution=True,
    )

    By default, execution is set to sequential. You can toggle between sequential and asynchronous execution using the following functions:

    s.activate_async_execution()
    s.activate_sequential_execution()

    When asynchronous execution is enabled, tasks are added to a task pool and executed once a strictly sequential task is reached. The function s.run() has been added so that users can trigger the execution of the pooled tasks themselves.

    Be sure to call s.run() at the end of your code to ensure all tasks are executed before the program terminates.
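
    Putting the pieces together, a minimal sketch of an asynchronous script (the chart call simply reuses the data and aggregation_func from the examples above, and the 'Async example' menu path is an arbitrary name):

    s.activate_async_execution()

    # These calls are queued in the task pool instead of blocking one by one
    for i in range(5):
        s.plt.bar(
            data=data,
            x='date', y=['Restaurant rating', 'food rating'],
            menu_path='Async example', order=i,
            aggregation_func=np.mean,
        )

    # Execute every pending task before the program terminates
    s.run()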
