Chapter 19. Terminal Commands

The following sections describe all terminal commands currently supported.

19.1. birt

Controls the BIRT engine. For executing reports, BIRT requires a running Eclipse environment, which starts when a BIRT report is executed for the first time and then keeps running. The "birt shutdown" command stops the runtime environment for BIRT reports.

Use: birt shutdown

19.2. cat

Displays the specified file in the terminal window.

Use: cat file
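
For example, assuming the default configuration layout under fileserver/etc (the exact path may differ in your installation), the following displays the main configuration file:

cat fileserver/etc/main/main.cf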

19.3. cd

Changes the current directory.

Use: cd directory
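
For example, to change into the internal file system (as used by the ''export all'' command in Section 19.23):

cd fileserver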

19.4. clearInternalDbCache

Clears the cache of the internal database. This is used to optimize performance for CSV and script datasources (refer to the ''datasources'' section).

Use: clearInternalDbCache

19.5. clearInternalScriptCache

Clears the internal script cache, e.g. compiled scripts.

Use: clearInternalScriptCache

19.6. columnsExist

Checks if a given column list exists in a given table.

Here, datasource is an object resolver query that returns exactly one datasource. Refer to Section 15.5. Object Resolver for more information on object resolver queries.

The following example checks if the ''CUS_CITY'' column exists in the ''T_AGG_CUSTOMER'' table of the datasource with id 123.

columnsExist id:DatasourceDefinition:123 T_AGG_CUSTOMER CUS_CITY

The following example checks if the ''CUS_CITY'' and ''myColumn'' columns exist in the ''T_AGG_CUSTOMER'' table of the datasource with id 123.

columnsExist id:DatasourceDefinition:123 T_AGG_CUSTOMER CUS_CITY myColumn

Use: columnsExist datasource table columns

19.7. columnsMetadata

Fetches the column metadata of a given table.

Here, datasource is an object resolver query that returns exactly one datasource. Refer to Section 15.5. Object Resolver for more information on object resolver queries.

The following example prints the metadata of the ''T_AGG_CUSTOMER'' table in the Datasource with id 123:

columnsMetadata id:DatasourceDefinition:123 T_AGG_CUSTOMER

The default metadata printed is the following:

  • COLUMN_NAME
  • TYPE_NAME
  • COLUMN_SIZE
  • DECIMAL_DIGITS
  • ORDINAL_POSITION
  • IS_NULLABLE
  • IS_AUTOINCREMENT

 

Metadata documentation of the columns above can be found here: https://docs.oracle.com/en/java/javase/11/docs/api/java.sql/java/sql/DatabaseMetaData.html#getColumns(java.lang.String,java.lang.String,java.lang.String,java.lang.String).

In addition to the default columns listed above, you may append any number of the available metadata columns by passing them as arguments. E.g. you may choose to append TABLE_SCHEM and CHAR_OCTET_LENGTH as such:

columnsMetadata id:DatasourceDefinition:123 T_AGG_CUSTOMER TABLE_SCHEM CHAR_OCTET_LENGTH

Note that you can use the >>> operator for sending the command results to a given datasink. This may be useful for long command outputs for better result analysis. You can also use > for new file creation or >> for file append. Details of all terminal operators can be found in Chapter 18. Terminal Operators.

Use: columnsMetadata datasource table [column] [column...]

19.8. config

Configuration files are cached in order to optimize performance. When a configuration file is changed, the cache must be cleared so that the changes are loaded and become active.

Use: config reload

In order to read the current active value of a configuration parameter, you can use "config echo", e.g. for reading the default charset in the main.cf configuration file:

config echo main/main.cf default.charset

would return e.g. "UTF-8".

For reading an attribute in the form:

<mailaction html="false">

you can write: config echo scheduler/scheduler.cf scheduler.mailaction[@html].

More details on the syntax can be found in the Apache Commons Configuration documentation: https://commons.apache.org/proper/commons-configuration/userguide/quick_start.html

19.9. connPoolStats

Prints connection pool statistics. For details on all parameters and configuration check https://www.mchange.com/projects/c3p0/ and the Connection Pool's Section on the Configuration Guide.

This command prints the following information:

Datasource Datasource name and id.
Max pool size Maximum number of connections a pool will maintain at any given time.
Number of connections Current total number of connections in the pool (both busy and idle).
Busy connections Number of busy connections in the pool. These connections are already checked out from the pool.
Idle connections Number of idle connections in the pool. These connections can be checked out from the pool in order to be used.
Threads awaiting connection checkout Number of threads currently waiting for a connection from the connection pool.
Unclosed orphaned connections Number of connections checked out from the pool but no longer being managed by the connection pool.

In order to monitor the connection pool usage, two important parameters are ''numBusyConnections'' and ''numThreadsAwaitingCheckoutDefaultUser''. If ''numBusyConnections'' reaches the ''maxPoolSize'', all connections in the connection pool are exhausted and you will see ''numThreadsAwaitingCheckoutDefaultUser'' increasing. This means that the number of connections in the connection pool is not sufficient for the current load.

19.10. copy

The superordinate copy command includes the following commands to copy ReportServer objects and entities.

19.10.1. copy parameterDefinitions

Copies all parameter definitions from an origin report to a target report. If the reports are variants, their parent base reports are used. Note that dependencies on other parameters are not copied, so these have to be copied manually.

In order to select the reports, you can use object resolver queries. Refer to Section 15.5. Object Resolver for more details on this. Note that the queries must each resolve to exactly one report.

If replaceExistingParameters is true, the command replaces parameter definitions in the target report that have the same key as in the origin report. If false, it ignores these.

Note that you can also copy parameter definitions with the ''copy'' and ''paste'' context menu items of the parameter management panel.

The following example copies all parameter definitions of the origin report with ID 123 into the target report with ID 456. It replaces parameter definitions in the target report having the same key as in the origin report.

copy parameterDefinitions id:Report:123 id:Report:456 true

Use: copy parameterDefinitions origin target replaceExistingParameters

19.11. cp

Copies one or more files to a target folder. The "-r" flag makes the copy recursive; in this case sub-folders are also copied.

Use: cp [-r] sourcefiles targetfolder

Note that report variants can also be copied into another report. By using wildcards (e.g. prefix*) you can copy several objects.
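
For example, the following recursively copies a hypothetical folder, including its sub-folders, into a hypothetical backup folder (both paths are placeholders):

cp -r myFolder backupFolder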

19.12. createTextFile

Creates a new text file in the fileserver file system and opens a window for editing the new file.

Use: createTextFile file
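
For example, the following creates a hypothetical text file named notes.txt in the current fileserver directory and opens it for editing:

createTextFile notes.txt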

19.13. datasourceMetadata

Allows to dynamically call any method from the DatabaseMetaData interface found here: https://docs.oracle.com/en/java/javase/11/docs/api/java.sql/java/sql/DatabaseMetaData.html.

The call fails if the argument count and name do not match exactly one method of said interface. The call fails as well if the args cannot be converted into the needed parameter types. null may be passed as a String in the args if necessary: it will be evaluated to a null object. As you will see, it is a very powerful and versatile tool at your disposal.

All results will be displayed as a table if their return type is a ResultSet.

Examples:

datasourceMetadata id:DatasourceDefinition:123 getDriverMajorVersion

datasourceMetadata id:DatasourceDefinition:123 getDriverName

datasourceMetadata id:DatasourceDefinition:123 getDatabaseMajorVersion

datasourceMetadata id:DatasourceDefinition:123 getColumns null null T_AGG_CUSTOMER null

In the last example we call the method https://docs.oracle.com/en/java/javase/11/docs/api/java.sql/java/sql/DatabaseMetaData.html#getColumns(java.lang.String,java.lang.String,java.lang.String,java.lang.String) and supply the table name to identify the table whose column metadata will be fetched.

So we pass the method name: getColumns and four parameters: null null T_AGG_CUSTOMER null. As mentioned above, null evaluates to the null object which means we call the method getColumns(null, null, ''T_AGG_CUSTOMER'', null).

Note that you can use the >>> operator for sending the command results to a given datasink. This may be useful for long command outputs for better result analysis. You can also use > for new file creation or >> for file append. Details of all terminal operators can be found in Chapter 18. Terminal Operators.

Use: datasourceMetadata datasource methodName [arg] [arg ...]

19.14. deployReport

Allows you to analyze a deployment attempt of a given report (left report) into a destination report (right report). Both reports have to already exist in ReportServer.

analyze Creates and downloads a document containing a deployment analysis. This analysis lists conflicts (including context) that would occur during a deployment attempt of the left report into the right report.

Note that if an entry does not cause a conflict, e.g. if the corresponding column is not used in any variant, the entry is not listed in the analysis result.

The -i option can be used to compare field names case-insensitively.

In order to select the reports, you can use object resolver queries. Refer to Section 15.5. Object Resolver for more details on this. Note that the queries must resolve to exactly one basic report.

Example:

deployReport analyze id:Report:123 id:Report:456

 

A PDF containing the analysis of deploying Report with id 123 (left report) into Report with id 456 (right report) is created and downloaded automatically.

The current sections in the analysis are:

  • Columns contained in the left report but not in the right report
  • Columns contained in both reports but with different definitions
  • Variants of the right report using columns not available in the left report
  • Variants of the right report using columns with different definitions than in the left report

Use: deployReport analyze [-i] leftReport rightReport

19.15. desc

Outputs object definitions as they are internally used by ReportServer. For instance, desc Report returns a list of all fields which are saved for the ReportServer report object. The desc command is primarily designed for developers who want to enhance ReportServer via scripts; for further information refer to the Script/Developer manual. In addition, you can display the stored object data: if you wish to display the fields saved in the database for a Jasper report, enter the command desc JasperReport myReport, where myReport stands for the name of your report. By setting the -w flag you control whether the output appears directly in the console or in a new window.

Tip: The command ls -l displays the entity name of an object.
 

Use: desc [-w] EntityName [Entity]

19.16. diffconfigfiles

Default configuration files are created on the first run of ReportServer. Later, when upgrading ReportServer to a newer version, it is probable that newly added configuration files will be missing (i.e. all configuration files added between the version originally installed and the version upgraded to). This command helps you to find out which configuration files are missing without having to search all release notes between these versions. Default config files can also be created with the help of this command.

showmissing allows you to compare the current set of configuration files with the expected set of configuration files of the currently installed version. It then lists all missing files.
createmissing allows you to create the missing default configuration files found with ''showmissing'' in the appropriate location inside your /fileserver/etc path.
createall copies all default missing configuration files into a given folder. This allows you to compare configuration file contents, fields, etc.

Use: diffconfigfiles (showmissing | createmissing | createall folder)
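
A typical upgrade check might look as follows (the target folder in the last line is a placeholder):

diffconfigfiles showmissing
diffconfigfiles createmissing
diffconfigfiles createall myConfigCompareFolder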

19.17. dirmod

Modifies fileserver directories. The following subcommand is available:

  • webaccess - Modify the web access property of a given fileserver directory.

The syntax for modifying a directory's web access is:

Use: dirmod webaccess directory access

Here, directory is an object resolver query that returns one or more FileServerFolders. Refer to Section 15.5. Object Resolver for more information on object resolver queries. access may be true or false.

Usage examples:

Removes web access from the fileserver/resources/ directory:

dirmod webaccess fileserver/resources/ false

Removes web access from directory with id 123:

dirmod webaccess id:FileServerFolder:123 false

Adds web access to all filesystem directories containing ''report'' in their name:

dirmod webaccess "hql:from FileServerFolder where name like '%report%'" true

19.18. echo

Outputs the given text on the console.

Use: echo hello world

19.19. editTextFile

Opens an existing text file in the fileserver file system for editing.

Use: editTextFile file

19.20. eliza

To communicate with Eliza, enter eliza and then hello. You terminate the conversation by entering CTRL+C or bye.

Use: eliza

19.21. env

Prints environment information of the current installation including relevant environment variables.

Use: env

19.22. exec

Executes scripts from the fileserver's file system.

By entering the -c flag (for commit) you control whether changes made by the script are persisted in the database.

With the -s flag (for silence) you can suppress all of the script's output.

The -t flag allows you to output the complete stacktrace in case of an exception.

By entering the -n flag (for non-monitored mode) you prevent the script from running in its own monitored thread. In other words, the script runs in the server thread instead of in its own thread.

Finally, the -w flag displays the script output in a new window.

For further information on scripts refer to the Script Guide.

Use: exec [-c] [-s] [-t] [-n] [-w] script
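
For example, the following would execute a hypothetical script from the fileserver, commit its changes and display its output in a new window:

exec -c -w myScript.groovy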

19.23. export all

Allows you to export all metadata to a file. This file can then be imported with the "import all" command. Usually you should directly zip and save the export data in the file system to save memory, as it is stored in an XML dialect.

For exporting and zipping the export file, use export all | zip > myExportFile.zip. You will get a zipped ''data'' file, which can be renamed to ''data.xml'' before importing again.

If you don't need to zip the file, you can use export all > myExportFile.xml.

Note that the export file has to be created inside the internal file system (cd fileserver) and that the ''import all'' command needs an unzipped XML file.

Use: export all > myExportFile.xml

19.24. groupmod

Modifies groups. The following subcommands are available:

addmembers Adds or removes members to or from a group. These can be users, OUs, or other groups.

The syntax for adding/removing members to/from a group is

groupmod addmembers [-c] group [members] [members...]

Here, group is an object resolver query that returns exactly one group. Refer to Section 15.5. Object Resolver for more information on object resolver queries. If the optional parameter -c is given, the given members are removed instead of added. If no members are given, all members are being deleted from the group member list. Finally, the members list parameters refer to one or more object resolver queries that return a user, a group, or an OU. Usage examples:

Deletes all members from the group with id 123:

groupmod addmembers -c id:Group:123

Deletes one user from the group with id 123:

groupmod addmembers -c id:Group:123 id:User:456

Adds three members (one user, one group, one OU) to the group with id 123:

groupmod addmembers id:Group:123 id:User:456 "hql:from Group where id=789" id:OrganisationalUnit:987

19.25. haspermission

Checks whether a given user has a given permission on a given target. Returns true if the user has the permission, else false.

The -g flag allows you to check generic permissions. Documentation of these, including the exact target types you can enter, can be found in Section 3.2. Permission Management.

For other objects, e.g. Users, Datasources, etc., you can check the entity types here: https://reportserver.net/api/latest/entities.html.

All objects can be fetched using object resolver queries. Refer to Section 15.5. Object Resolver for more details of object resolver queries.

Valid permissions are:

  • Read
  • Write
  • Execute
  • Delete
  • GrantAccess
  • TeamSpaceAdministrator

 

The following example checks if the user with id 123 has Execute permission on the AccessRsSecurityTarget generic target, i.e. if the user is allowed to log in into ReportServer.

reportserver$ haspermission -g id:User:123 net.datenwerke.rs.core.service.genrights.access.AccessRsSecurityTarget Execute
true

The following example checks if the user with id 123 has Read permission on the datasource with id 456.

reportserver$ haspermission id:User:123 id:DatasourceDefinition:456 Read
false

Use: haspermission [-g] user target right

19.26. hello

Says hello

Use: hello

19.27. hql

Executes HQL (Hibernate Query Language) database queries and displays the results. HQL is used to write database-independent queries. More information here: https://docs.jboss.org/hibernate/orm/5.6/userguide/html_single/Hibernate_User_Guide.html#hql and here: https://docs.jboss.org/hibernate/core/3.3/reference/en/html/queryhql.html.

Note you can find all entities in ReportServer here: https://reportserver.net/api/latest/entities.html and all javadocs here: https://reportserver.net/api/latest/javadoc/index.html.

The results can be displayed in a new window with the -w flag.

Example uses are shown next.

List all reports:

reportserver$ hql "from Report"

List all users:

reportserver$ hql "from User"

List all dynamic lists with name like 'MyReport':

reportserver$ hql "from TableReport t where t.name like '

List all report properties from report with id 123:

reportserver$ hql "select r.reportProperties from Report r where r.id = 123"

Use: hql [-w] query

19.28. id

Prints information about a given username.

This includes group information and organizational unit information of a user.

Group information includes direct and indirect groups (via other groups or via organizational units).

Use: id username
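
For example, assuming the default root account exists:

id root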

19.29. info

Displays information about ReportServer objects. The following subcommands are available:

19.29.1. info datasource

Displays general information of a given datasource. For relational databases, displays additional metadata information.

You can use an object resolver query to locate the specific datasource. Refer to Section 15.5. Object Resolver for more details of object resolver queries.

Example for displaying information of the datasource with id 123:

info datasource id:DatasourceDefinition:123

Use: info datasource datasource

19.30. import all

Allows you to import an export file which was created with "export all".

Use: import all export.xml

19.31. kill

Allows you to terminate ongoing script executions. Refer also to ps. By entering the -f flag you can terminate the script execution thread. Keep in mind that this is effected via Thread.stop(), which might provoke errors in ReportServer. For a discussion of the use of Thread.stop() please refer to http://docs.oracle.com/javase/6/docs/technotes/guides/concurrency/threadPrimitiveDeprecation.html.

Use: kill [-f] id
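
For example, to terminate the monitored script execution with id 123 (a hypothetical id taken from the ps output):

kill 123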

19.32. ldapfilter

Allows you to analyze the LDAP filter installed in the sso/ldap.cf configuration file. The LDAP filter is parsed and shown in a multi-line form that makes it easier to understand its hierarchy and embedded components. The command also tries to simplify the LDAP filter in certain ways (for example, by removing unnecessary levels of hierarchy, like an AND embedded in another AND).

The optional -i (indentation) flag indicates the number of spaces used for indentation. The default is 2. The optional -n (no-simplification) flag indicates that no simplification should be done, i.e., the filter should not be further analyzed. Note that you have to reload your configuration changes with config reload or restart your ReportServer when you change your filter in the sso/ldap.cf configuration file.

As the output of this terminal command is usually long, you can use the >>> operator for sending the output to a given datasink: ldapfilter >>> id:DatasinkDefinition:123.

Use: ldapfilter [-i] [-n]

19.33. ldapguid

Makes a best-effort guess of the appropriate GUID needed for your specific LDAP server. The GUID is needed in the LDAP configuration file: sso/ldap.cf.

Use: ldapguid

19.34. ldapimport

Imports LDAP users, groups and organizational units as configured in sso/ldap.cf. Configuration options are described in the Configuration Guide.

For scheduling the functionality periodically, you can use the script available here: https://github.com/infofabrik/reportserver-samples/blob/main/src/net/datenwerke/rs/samples/admin/ldap/ldapimport.groovy and schedule it via ''scheduleScript''.

Use: ldapimport

19.35. ldapinfo

Displays some information about the LDAP server configured in the LDAP configuration file: sso/ldap.cf.

Use: ldapinfo

19.36. ldapschema

The superordinate ldapschema command includes the following commands which allow you to browse and analyze your LDAP schema. This may be useful for finding out the values needed in the LDAP configuration file: sso/ldap.cf.

Note that you can use the >>> operator for sending the command results to a given datasink. This may be useful for long command outputs for better result analysis. You can also use > for new file creation or >> for file append. Details of all terminal operators can be found in Chapter 18. Terminal Operators. For example, in order to send a list of all your LDAP attributes to a datasink with ID 123, you can enter the following command: ldapschema attributeList >>> id:DatasinkDefinition:123

19.36.1. ldapschema attributeInfo

Displays schema information of a given attribute. This information includes the attribute's OID, all its names, description, superior and sub-attributes, syntax, matching rules, usage, among others. You can use the ldapschema attributeList for listing all available attributes, which you can further analyze with ldapschema attributeInfo.

Use: ldapschema attributeInfo attribute

19.36.2. ldapschema attributeList

Displays a list of all attributes found in your LDAP server.

Use: ldapschema attributeList

19.36.3. ldapschema entry

Displays a text representation of the complete LDAP schema entry. As this output is usually a long output, you can use the >>> operator for sending the output to a given datasink as noted above: ldapschema entry >>> id:DatasinkDefinition:123.

Use: ldapschema entry

19.36.4. ldapschema matchingRuleInfo

Displays schema information of a given matching rule. This information includes the matching rule's OID, all its names, description, usage, among others. You can use the ldapschema matchingRuleList for listing all available matching rules, which you can further analyze with ldapschema matchingRuleInfo.

Use: ldapschema matchingRuleInfo matchingRule

19.36.5. ldapschema matchingRuleList

Displays a list of all matching rules found in your LDAP server.

Use: ldapschema matchingRuleList

19.36.6. ldapschema objectClassInfo

Displays schema information of a given object class. This information includes the object class OID, all its names, description, super and sub-classes, required and optional attributes, among others. You can use the ldapschema objectClassList for listing all available object classes, which you can further analyze with ldapschema objectClassInfo.

Use: ldapschema objectClassInfo objectClass

19.36.7. ldapschema objectClassList

Displays a list of all object classes found in your LDAP server.

Use: ldapschema objectClassList

19.36.8. ldapschema syntaxRuleInfo

Displays schema information of a given syntax rule. This information includes the syntax rule's OID, description, and usage, among others. Note that, unlike the rest of the ldapschema subcommands, the OID is required for the syntaxRuleInfo subcommand. You can use the ldapschema syntaxRuleList for listing all available syntax rules together with their OIDs.

Use: ldapschema syntaxRuleInfo syntaxRule

19.36.9. ldapschema syntaxRuleList

Displays a list of all syntax rules found in your LDAP server.

Use: ldapschema syntaxRuleList

19.37. ldaptest

Tests LDAP filter, GUID, users, groups and organizational units as configured in sso/ldap.cf. Configuration options are described in the Configuration Guide.

Note that you can use the >>> operator for sending the command results to a given datasink. This may be useful for long command outputs for better result analysis. You can also use > for new file creation or >> for file append. Details of all terminal operators can be found in Chapter 18. Terminal Operators.
 
When troubleshooting your LDAP configuration, you should run the commands shown next in the order shown below, as some of them depend on a correct configuration of the previous ones. E.g. ldaptest users needs a correct filter installed, so ldaptest filter should be checked first.
 
ldaptest filter
ldaptest guid
ldaptest groups
ldaptest organizationalUnits
ldaptest users
ldaptest orphans

19.37.1. ldaptest filter

Allows you to test the installed filter and prints the results.

If the -a flag is entered, the command requests and displays additional LDAP attributes. These must be separated by semicolons (;).

E.g., in order to display the mail, member and ou attribute values of each entry, you can enter the following:

ldaptest filter -a mail;member;ou

Use: ldaptest filter [-a]

19.37.2. ldaptest guid

Allows you to test the installed GUID and prints the results.

Use: ldaptest guid

19.37.3. ldaptest groups

Allows you to show the LDAP groups together with their attributes (in the sso/ldap.cf configuration file) that would be imported in an ldapimport execution.

If the -s (schema) flag is entered, the schema of the groups' object class is shown. This may be useful for finding out other group properties that can be entered into the ldap.cf configuration file. You can also use the ldapschema command for further exploring your object class attributes (refer to 19.36. ldapschema).

If the -a flag is entered, the command requests and displays additional LDAP attributes. These must be separated by semicolons (;).

E.g., in order to display the instanceType and groupType attribute values of each group, you can enter the following:

ldaptest groups -a instanceType;groupType

Use: ldaptest groups [-s] [-a]

19.37.4. ldaptest organizationalUnits

Allows you to show the LDAP organizational units together with their attributes (in the sso/ldap.cf configuration file) that would be imported in an ldapimport execution.

If the -s (schema) flag is entered, the schema of the organizational units' object class is shown. This may be useful for finding out other organizational unit properties that can be entered into the ldap.cf configuration file. You can also use the ldapschema command for further exploring your object class attributes (refer to 19.36. ldapschema).

If the -a flag is entered, the command requests and displays additional LDAP attributes. These must be separated by semicolons (;).

E.g., in order to display the distinguishedName and commonName attribute values of each organizational unit, you can enter the following:

ldaptest organizationalUnits -a distinguishedName;commonName

Use: ldaptest organizationalUnits [-s] [-a]

19.37.5. ldaptest users

Allows you to show the LDAP users together with their attributes (in the sso/ldap.cf configuration file) that would be imported in an ldapimport execution.

If the -s (schema) flag is entered, the schema of the users' object class is shown. This may be useful for finding out other user properties that can be entered into the ldap.cf configuration file. You can also use the ldapschema command for further exploring your object class attributes (refer to 19.36. ldapschema).

If the -a flag is entered, the command requests and displays additional LDAP attributes. These must be separated by semicolons (;).

E.g., in order to display the memberOf and nickname attribute values of each user, you can enter the following:

ldaptest users -a memberOf;nickname

Use: ldaptest users [-s] [-a]

19.37.6. ldaptest orphans

Your LDAP filter should return all (and only!) your users, groups and organizational units. If more nodes are returned, or if the mappings in ldap.cf are not correct, nodes are returned that cannot be mapped to a user, a group or an organizational unit. These are called LDAP orphans. In a correct installation and configuration, there should not be any LDAP orphans. Thus, you get LDAP orphans when your LDAP filter returns ''too much''. You can easily list all LDAP orphans with this terminal command.

If the -a flag is entered, the command requests and displays additional LDAP attributes. These must be separated by semicolons (;).

Use: ldaptest orphans [-a]

19.38. listlogfiles

Displays a list of the log files in the catalina.home path. If you need to explicitly set the log file path, you can use the logdir setting in the main.cf configuration file.

You can specify the sorting column(s) by a semicolon-separated list of column numbers in the -s option. Allowed are values 1, 2, and 3 for the first (filename), second (last modified) and third (size) columns, respectively. If you need to sort a given column in descending order, you can enter a - prefix in front of the column's index. Default sorting is by filename (ascending order).

Further, you can use Java regular expressions (https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/regex/Pattern.html) for filtering files.

The example below lists all log files starting with ''reportserver'' and sorts them by size in descending order.

listlogfiles -s -3 -f "reportserver.*"

The example below lists all log files starting with ''reportserver'' and sorts them by size in descending order, followed by filename in ascending order.

listlogfiles -s -3;1 -f "reportserver.*"

Further, you can use the -e option if you want to send the (filtered) log files via e-mail. For example, the following allows you to ZIP and send all log files starting with ''reportserver'' to the current user via e-mail.

listlogfiles -s -3;1 -f "reportserver.*" -e

If you need to send the (filtered) log files to any datasink, you can use the -d option for this. You can use any object resolver query to locate the specific datasink. Refer to Section 15.5. Object Resolver for more details of object resolver queries. For example, the following allows you to ZIP and send all log files starting with ''reportserver'' to the datasink with id 123.

listlogfiles -s -3;1 -f "reportserver.*" -d id:DatasinkDefinition:123

Note that you can display the last n lines of a given log file with the viewlogfile command described in Section 19.67. viewlogfile.

Use: listlogfiles [-s] [-f] [-e] [-d]

19.39. listpath

Displays the object path. So "listpath ." returns the current path. By entering the -i flag you control whether the path is displayed using object names or object IDs. ReportServer internally saves paths via object IDs, as they are unique in contrast to object names.

Use: listpath [-i] Object

19.40. locate

The locate command searches for objects matching a given expression / object resolver query. The optional argument -t allows you to filter the results to a given object type.

Note that, in order to find an object, your user has to have at least read permissions on that object and module read permissions on the management module where this object resides. E.g.: to find a specific report, the user has to have at least read permissions on the report and read permissions on the report management module. Check Chapter 3. User and Permission Management for more information on permissions.

Expressions can be an object id or object name / key. The object name / key may contain wildcards. E.g.:

locate 123 Locates the object with id 123.
locate my*Report Locates all objects matching the "my*Report" expression. E.g. it matches both "myJasperReport" and "myTableReport".
locate -t TableReport my*Report Locates all objects matching the "my*Report" expression of type "TableReport". I.e. it filters the results to return only TableReports.

You can further use an object resolver query to locate the specific entity or group of entities. Refer to Section 15.5. Object Resolver for more details of object resolver queries. Note that when you use an object resolver query, the -t option is ignored. Quotation marks are needed when the object resolver query contains spaces. E.g.:

reportserver$ locate "hql:from TableReport where id=123"
Report Root/Dynamic Lists/myReport (Reportmanager)

Use: locate [-t] expression

19.41. ls

Displays the files in the respective folder. By entering ls -l, additional information per file will be displayed.

Use: ls [-l] path
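
For example, the following lists the contents of the fileserver node together with additional information per file (the path is only illustrative):

ls -l fileserver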

19.42. meminfo

Displays the memory utilization of ReportServer.

Used Memory Your current memory usage.
Free Memory The amount of free memory in the Java Virtual Machine.
Total Memory The total amount of memory in the Java Virtual Machine. This value may vary over time, depending on the host environment.
Max Memory The maximum amount of memory that the Java Virtual Machine will attempt to use.

Use: meminfo

19.43. mkdir

Creates a new folder.

Use: mkdir folder name
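
For example, to create a hypothetical folder named myFolder in the current directory:

mkdir myFolder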

19.44. mv

Moves files to a target folder. By using wildcards (e.g. prefix*) you can move several objects.

Note that report variants can also be moved into another report.

Use: mv sourcefile targetfolder
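
For example, the following moves all objects starting with ''old'' into a hypothetical archive folder:

mv old* archiveFolder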

19.45. onedrive

Provides easy access to the Microsoft Graph API. onedrive is a three-tiered command: subcommands loosely group subsubcommands depending on which kind of OneDrive object those commands deal with. Subsubcommands can be of a simple or more complex nature depending on their purpose. All commands use a OneDrive - SharePoint (O365) datasink for configuration and require certain permissions to use the Graph API. Should the permissions granted by the access token of the OneDrive datasink be insufficient, you can supply an optional accesstoken, which will be used instead of the default one.

Here, datasink is an object resolver query that returns exactly one OneDrive datasink object. Refer to Section 15.5. Object Resolver for more information on object resolver queries.

group getmygroups fetches and displays all OneDrive groups you belong to. This requires the permission Group.Read.All
group getdrivesof fetches and displays information of all available drive objects of a given group. This requires the permission Sites.Read.All

Use: onedrive group getmygroups datasink [accesstoken]

Use: onedrive group getdrivesof groupid datasink [accesstoken]

19.46. pkg

Command to install ReportServer packages.

list Lists all available packages in the local filesystem.
install Installs a package. Use the -d flag to install a package from the local file system.
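
A hypothetical sequence, assuming a package file named myPackage.zip is available in the local file system (the actual package name and format depend on your installation):

pkg list
pkg install -d myPackage.zip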

19.47. properties

Allows you to view and modify the properties map. The following subcommands are available.

19.47.1. properties clear

Removes all entries from the properties.

Use: properties clear

19.47.2. properties contains

Returns true if the properties map contains a mapping for the specified key.

Use: properties contains key

19.47.3. properties list

Lists all entries in the properties map.

Use: properties list

19.47.4. properties put

Adds a new entry to the properties map. If the key specified already exists in the map, it modifies the corresponding entry.

Use: properties put key value

19.47.5. properties remove

Removes a specific entry from the properties map.

Use: properties remove key
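
A short example using a hypothetical key and value:

properties put myKey myValue
properties contains myKey
properties remove myKey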

19.48. ps

Displays scripts which are currently running (and monitored). By entering the kill command you can interrupt executions.

Use: ps

19.49. pwd

Displays the current path. By entering the -i flag, the path will be given in ID presentation.

Use: pwd [-i]

19.50. rcondition

rcondition designates dynamic lists as a basis for conditional scheduling (refer also to Chapter 13. Scheduling of Reports). The following sub-commands are available:

create Creates a new condition.

The syntax for creating a new condition is

rcondition create conditionReport name [key] [description]

Here, conditionReport can be either the id or an object resolver query that returns exactly one dynamic report variant. Refer to Section 15.5. Object Resolver for more information on object resolver queries. Name and description describe the condition, while the key uniquely identifies it. Remember to use enclosing quotation marks if object resolver queries, names, keys or descriptions contain spaces, e.g. "This is a description".
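
For example, the following creates a condition from the dynamic list variant with id 123 (the name, key and description are placeholders):

rcondition create 123 "My condition" myConditionKey "This is a description"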

list Displays a list of the reports marked as conditions.
remove Removes a condition.

Use: rcondition remove condition

Here, condition can be either the id or an object resolver query of the condition to remove. E.g.:

rcondition remove id:Condition:123

19.51. registry

Allows you to view and modify the registry. The following subcommands are available.

19.51.1. registry clear

Removes all entries from the registry.

Use: registry clear

19.51.2. registry contains

Returns true if the registry contains a mapping for the specified key.

Use: registry contains key

19.51.3. registry list

Lists all entries in the registry.

Use: registry list

19.51.4. registry put

Adds a new entry to the registry. If the key specified already exists in the registry, it modifies the corresponding entry.

Use: registry put key value

19.51.5. registry remove

Removes a specific entry from the registry.

Use: registry remove key

19.52. reportmod

Sets and reads out report properties (ReportProperty). These can, for instance, be used in connection with scripts and enhancements to save data with the report. In addition, the command allows you to set the unique report UUID.

19.53. rev

By entering the rev command you gain access to saved object versions. The list sub-command lists all existing versions of an object. The restore sub-command restores a former version of an object.

list Lists all revisions of an object.
restore Restores an old version of an object

Example:

rev list id:TableReport:123 Lists all revisions of the dynamic list with id 123.
rev restore id:TableReport:123 456 /reportmanager/test Restores revision 456 of the dynamic list 123 into /reportmanager/test.

19.54. rm

Deletes files/objects. To recursively delete folders (which are not empty), the -r flag has to be added.

Note that report variants can also be removed. By using wildcards (e.g. prefix*) you can remove several objects.

Use: rm [-r] object
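
For example, the following recursively deletes a hypothetical non-empty folder:

rm -r myOldFolder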

19.55. rpull

Allows you to pull entities from a remote RS server into a local RS installation, e.g. from PROD to TEST.

You can always use the terminal for finding out the exact path of the node you want to pull. Just navigate to that node in the remote RS and enter the pwd command as described in Section 19.49.

19.55.1. rpull copy

Copies the fetched entities from a remote RS server into a local RS, e.g. from PROD to TEST. The arguments required are the following:

remoteServer Object resolver query of the remote server, e.g. /remoteservers/PROD. Refer to Section 15.5. Object Resolver for more information on object resolver queries and to Section 12. Remote RS Servers for more information on remote RS servers.
remoteEntityPath The entity path in the remote RS server, e.g. /usermanager/ClassicModelCars. This path can be a directory or a single entity.
localTarget The target in the local RS where the remote entities should be copied to, e.g. /usermanager/import
-c Optional flag to indicate that a check is performed instead of the real import. It displays all problems found, i.e. it allows you to pre-check the real entity import and look for potential errors.
-v Optional flag to indicate that report variants should also be imported from the remote RS server. When importing objects other than reports, this flag is ignored.

In order to be able to import remote entities, all keys must be set in the remote entities.
 
In order to be able to import remote reports, their datasources must be able to be mapped to local datasources. This mapping is defined via the /etc/main/rssync.cf configuration file. Details can be found in the Configuration Guide.
 

The example below imports users from the remote RS REST server named PROD. The entities are imported from the remote location /usermanager/ClassicModelCars into the local location /usermanager/import.

rpull copy /remoteservers/PROD /usermanager/ClassicModelCars /usermanager/import

The example below imports reports, including variants, from the remote RS REST server named PROD. The entities are imported from the remote location /reportmanager/myreports into the local location /reportmanager/import.

rpull copy -v /remoteservers/PROD /reportmanager/myreports /reportmanager/import

The example below performs a check without performing the real import of the example above:

rpull copy -c -v /remoteservers/PROD /reportmanager/myreports /reportmanager/import

The example below imports files from the remote RS REST server named PROD. The entities are imported from the remote location /fileserver/myfiles into the local location /fileserver/import.

rpull copy /remoteservers/PROD /fileserver/myfiles /fileserver/import

Use: rpull copy [-c] [-v] remoteServer remoteEntityPath localTarget

19.56. scheduleScript

Enables you to execute timer-controlled scripts. "scheduleScript list" delivers a list of the currently scheduled scripts. "scheduleScript execute" allows you to enter further dispositions.

Note that the command scheduler list shows all scheduler jobs, and scheduler remove jobid allows you to remove current jobs. Refer to the scheduler documentation in Section 19.57 for more details on this command.

To schedule scripts use the following syntax:

scheduleScript execute script scriptArguments expression

Here, script is the object reference of a script. Expression determines the scheduling sequence. Please find here some examples:

scheduleScript execute myScript.groovy " " today at 15:23
scheduleScript execute myScript.groovy " " every day at 15:23
scheduleScript execute myScript.groovy " " at 23.08.2012 15:23
scheduleScript execute myScript.groovy " " every workday at 15:23 starting on 15.03.2011 
		for 10 times
scheduleScript execute myScript.groovy " " every hour at 23 for 10 times
scheduleScript execute myScript.groovy " " today between 16:00 and 23:00 every 10 minutes
scheduleScript execute myScript.groovy " " every week on monday and wednesday at 23:12 
		starting on 27.09.2011 until 28.11.2012
scheduleScript execute myScript.groovy " " every month on day 2 at 12:12 starting on 
		27.09.2011 11:25 for 2 times

19.57. scheduler

The superordinate scheduler command includes the following commands to control the scheduler.

daemon Enables you to start and stop the scheduler. disable will stop the scheduler and prevent it from restarting in case of a ReportServer restart. Commands prefixed by wd refer to the Watchdog which is integrated in the scheduler. For further information on this refer to the Developer manual.
Use: scheduler daemon [start, stop, restart, enable, disable, status, wdstatus, wdshutdown, 
	wdstart, wdrestart]
list Lists jobid, type and nextFireTime.
Use: scheduler list
listFireTimes Lists the upcoming fire times for a given jobid for the next numberofFireTimes. If numberofFireTimes is not specified, the default is 10.
Use: scheduler listFireTimes jobid numberofFireTimes
remove Deletes a job with given jobid from the dispositions.
Use: scheduler remove jobid
replaceUser Replaces an old user with a new user in all owners, executors, scheduled-by and recipients of all active scheduler jobs. The old and new users can be addressed with an object resolver query. Refer to Section 15.5. Object Resolver for more information on object resolver queries.

The following example replaces the old user with ID 123 with the new user with ID 456.

scheduler replaceUser id:User:123 id:User:456

Use: scheduler replaceUser oldUser newUser
unschedule Cancels a job with given jobid from the dispositions.
Use: scheduler unschedule jobid

19.58. sql

The sql command enables you to directly access a relational database and run normal SQL commands with the user stored in the datasource object. By entering bye you leave the console. A query always displays 100 result lines at a time. With ENTER you can browse through the results.

Here, datasource is an object resolver query that returns exactly one datasource of type DatabaseDatasource. Refer to Section 15.5. Object Resolver for more information on object resolver queries.

Example:

reportserver$ cd "/datasources/internal datasources/"
reportserver$ sql "ReportServer Data Source"
> SELECT COUNT(*) FROM RS_AUDIT_LOG_ENTRY
COUNT(*)
27783
> bye
Good Bye

Use: sql datasource

19.59. tableExists

Checks if a given table exists in a given datasource. Here, datasource is an object resolver query that returns exactly one datasource. Refer to Section 15.5. Object Resolver for more information on object resolver queries.

The following example checks if the ''T_AGG_CUSTOMER'' table exists in the datasource with id 123.

tableExists id:DatasourceDefinition:123 T_AGG_CUSTOMER

Use: tableExists datasource table

19.60. ssltest

Allows you to test your SSL configuration. For example, the following allows you to test an HTTPS connection to www.google.com:

ssltest www.google.com 443

In case you installed a server's certificate, for example for LDAPS or LDAP StartTLS, this command is useful for testing the installed certificate analogously as shown below:

ssltest ipOrHostOfYourServer 10389

Use: ssltest host port

19.61. teamspacemod

Modifies TeamSpaces. The following subcommands are available:

addmembers Adds or removes members to or from a TeamSpace. These can be either users or groups.

The syntax for adding/removing members to/from a TeamSpace is

teamspacemod addmembers [-c] teamspace [members] [members...]

Here, teamspace is an object resolver query that returns exactly one TeamSpace. Refer to Section 15.5. Object Resolver for more information on object resolver queries. If the optional parameter -c is given, the given members are removed instead of added. If no members are given, all members are being deleted from the TeamSpace member list. Finally, the members list parameters refer to one or more object resolver queries that return a user or a group. All these users and groups are being added to the given TeamSpace as guests. Usage examples:

Deletes all members from the TeamSpace with id 123:

teamspacemod addmembers -c id:TeamSpace:123

Deletes one member from the TeamSpace with id 123:

teamspacemod addmembers -c id:TeamSpace:123 id:User:456

Adds three members (two users and one group) to the TeamSpace with id 123:

teamspacemod addmembers id:TeamSpace:123 id:User:456 "hql:from Group where id=789" "/usermanager/myOU/myUser"

setrole Changes a user's role in the TeamSpace.

19.62. unzip

Unpacks files in ZIP format.

Use: unzip file
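
For example, to unpack the export archive created in Section 19.23:

unzip myExportFile.zip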

19.63. updateAlias

Reloads the alias configuration. Refer also to the ''Terminal'' section.

Use: updateAlias

19.64. updatedb

Updates the search index. This may take some minutes depending on the data volume.

Use: updatedb

19.65. usermod

Sets UserProperties. These can be used for enhancements. For further information refer to the Script/Developer manual.

Use: usermod setproperty theProperty theValue theUser
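
For example, the following sets a hypothetical property on the user with id 123 (assuming, as with other commands, that the user can be addressed via an object resolver query; refer to Section 15.5. Object Resolver):

usermod setproperty myProperty someValue id:User:123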

19.66. variantTest

Creates a PDF document containing a test analysis of an execution of a given variant or report. In case of errors, it shows the error details in the result.

For base reports, the report query is shown in the test results if the user has ''Administration'' and ''Report management'' generic permissions.

For reports using database bundles, the command also allows you to specify which datasources should be tested. All datasources given in the command must be valid for the given report, i.e. it is not allowed to test a variant with a datasource not used in the corresponding report.

In order to select the reports and the datasources, you can use object resolver queries. Refer to Section 15.5. Object Resolver for more details on this. Note that the report queries must resolve to exactly one Report entity and the datasource queries to one or more DatasourceDefinitions.

Examples:

variantTest id:Report:123

The report or variant with ID 123 is being tested.

variantTest id:DatasourceDefinition:456 id:DatasourceDefinition:789 id:Report:123

The report or variant with ID 123 is being tested with datasources 456 and 789. Both datasources must be part of the datasource bundle the report uses.

variantTest "hql:from DatasourceDefinition where name='PROD'" id:Report:123

The report or variant with ID 123 is being tested with the datasource named PROD. This datasource must be part of the datasource bundle the report uses.

Use: variantTest [datasource] [datasource...] report

19.67. viewlogfile

Displays the last n lines of a given log file in the catalina.home path. If you need to explicitly set the log file path, you can use the logdir setting in the main.cf configuration file.

The example below shows the ''reportserver.log'' file.

viewlogfile reportserver.log

Note that you can list, filter and send via e-mail or any datasink the complete log files using the listlogfiles command described in Section 19.38. listlogfiles.

Use: viewlogfile logFilename

19.68. xslt

Performs an XSL transformation. Here, stylesheet, input and output are FileServer files.

Example: xslt T_AGG_EMPLOYEE.xsl T_AGG_EMPLOYEE_input.html T_AGG_EMPLOYEE_result.xml

You can find the result and the example files here: https://github.com/infofabrik/reportserver-samples/tree/main/src/net/datenwerke/rs/samples/templates/xslt.

Use: xslt stylesheet input output

19.69. zip

Compresses and packs files into a zip archive. The input list is a space-separated list of files and/or directories.

Example:

zip myfile.zip 1.groovy 2.groovy etc 3.groovy

Zips three files and one directory into myfile.zip.

Use: zip outputFile.zip inputList
