The FileStation UI will not let you create a softlink. You will only be able to create “shortcuts,” which live either under Favorites or on the Desktop. However, since Synology DSM is essentially a Linux system, it is simple to log into the box through SSH and create a symlink from the command line. This will easily create a softlink. However, softlinks don’t show up in the FileStation UI. So not very
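A minimal sketch of the symlink step, run here in a scratch directory for safety; on the actual Synology box you would substitute real share paths (e.g. under `/volume1/...`), which are not shown in the excerpt:

```shell
# Demonstration in a scratch directory; on the Synology device you would
# use real share paths (e.g. /volume1/...) as target and link name.
tmp=$(mktemp -d)
mkdir "$tmp/real-folder"

# ln -s <target> <link-name>
ln -s "$tmp/real-folder" "$tmp/soft-link"

# Verify: the link shows up with an arrow to its target
ls -l "$tmp/soft-link"
```

Note `ln -s` takes the target first and the link name second, the reverse of what many people expect.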

Use Case: I would like to monitor events from a Synology NFS mount on another system. Solution: a Python module to monitor and forward filesystem events to an AWS SQS queue that my external system can monitor and act upon. Install Python 3 on Synology: visit the Synology Package Center, search for “python”, and install it. Then SSH into your Synology device and verify that python3 is available,
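The core of such a module can be sketched with a stdlib-only polling watcher (a reasonable fallback on NFS, where inotify events often do not propagate). The function and parameter names below are illustrative, not from the post; the SQS forwarding call is indicated only in a comment because it requires boto3 and live AWS credentials:

```python
import os
import time

def poll_new_files(path, known, on_new):
    """Scan `path` once; invoke on_new(filepath) for files not seen before."""
    for entry in os.scandir(path):
        if entry.is_file() and entry.name not in known:
            known.add(entry.name)
            on_new(entry.path)

def watch(path, interval=5.0, on_new=print, cycles=None):
    """Poll `path` every `interval` seconds; stop after `cycles` scans if given.

    In the real module, on_new would forward the event to SQS, e.g.
    boto3.client("sqs").send_message(QueueUrl=..., MessageBody=filepath)
    (queue URL and message format are assumptions, not from the post).
    """
    known = set()
    scans = 0
    while cycles is None or scans < cycles:
        poll_new_files(path, known, on_new)
        time.sleep(interval)
        scans += 1
```

Polling trades latency for reliability: it works identically on local disks and NFS mounts, at the cost of a scan every `interval` seconds.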

One of the recommendations from Keycloak is to limit access to the master realm, or to use the system without it. However, before doing so, you must first ensure that each of your other realms has an administrator that can manage it. After that, we can safely disable the master realm and manage our secondary realms using their respective administrative accounts. To do this, we should first log in to the master
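One way to grant a secondary realm its own administrator is with Keycloak's admin CLI, `kcadm.sh`. This is a sketch against a local server; the realm name, usernames, and server URL are examples, not from the post:

```shell
# Log in to the admin CLI against the master realm first
kcadm.sh config credentials --server http://localhost:8080/auth \
  --realm master --user admin

# Grant a user in the secondary realm full admin rights over that realm
# ("myrealm" and "realm-admin-user" are example names)
kcadm.sh add-roles -r myrealm --uusername realm-admin-user \
  --cclientid realm-management --rolename realm-admin
```

The `realm-admin` role on the realm's built-in `realm-management` client is what lets that user administer the realm without touching master.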

Most development projects rely on protected, external resources, such as databases or REST services; and many times, for the sake of simplified testing, we add those credentials to our configuration files, which, if accidentally leaked to the wrong person, can become a painful and expensive issue. In this article, I will demonstrate how to avoid these issues in a Java/Maven development environment. Password leaks can be avoided by simply
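One well-known Maven mechanism for this (the post's actual approach may differ) is Maven's built-in password encryption, which keeps encrypted credentials in `settings.xml` and the master key in a separate file outside the project:

```shell
# 1. Create a master password; paste the output into
#    ~/.m2/settings-security.xml:
#      <settingsSecurity>
#        <master>{encrypted-master-password}</master>
#      </settingsSecurity>
mvn --encrypt-master-password

# 2. Encrypt each server credential and reference it from
#    ~/.m2/settings.xml (the server id "my-database" is an example):
#      <server>
#        <id>my-database</id>
#        <username>deploy</username>
#        <password>{encrypted-password}</password>
#      </server>
mvn --encrypt-password
```

Since neither file lives in the project tree, nothing sensitive ends up in version control.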

TL;DR: Using Keycloak as an IDM or LDAP domain aggregator. Download the APS Identity Sync extension: https://github.com/alex4u2nv/aps-ais-authority-sync/releases/download/v1.0.0/aps-identity-sync-java-1.0.0-jar-with-dependencies.jar Configure APS to integrate with Keycloak as in the example activiti-identity-service.properties. Configure Keycloak to integrate with multiple LDAP domains via the User Federation service. Authenticate into APS using users that were synchronized. If Keycloak authentication is enabled, then authenticate through Keycloak. If other authentication methods are bound to the same user IDs (email addresses), then use

Quick Steps Walkthrough This walkthrough is targeted at audiences who are new to Vault, or DevOps engineers who just need an API to develop auto-deployment scripts against. A production environment should be installed and operated by a HashiCorp Vault expert. Pull and Run: pull the Docker image and run it in the foreground with port 8200 exposed, using the following command: docker pull vault docker run --cap-add=IPC_LOCK -p
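A typical full invocation of the dev-mode container looks like the following; the excerpt cuts off at `-p`, so the port mapping and root-token value here are assumptions based on Vault's standard dev-mode usage:

```shell
docker pull vault

# Dev-mode server: mlock enabled, API published on 8200
# (the root token value is an example, not from the post)
docker run --cap-add=IPC_LOCK \
  -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' \
  -p 8200:8200 vault
```

`--cap-add=IPC_LOCK` lets Vault lock memory so secrets are not swapped to disk; dev mode is unsealed automatically and is for experimentation only.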

Extending ACS 6 Docker images After you have your ACS 6 local environment running, you’re probably thinking: this is nice, but I want to deploy my favorite AMPs, like JS Console or Alfresco Governance Services (AGS), or even custom AMPs developed with the Alfresco SDK.
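A sketch of the usual pattern for layering an AMP onto the stock image: the image tag, AMP filename, and Tomcat/MMT paths below are assumptions and may differ between ACS 6 releases:

```dockerfile
# Sketch only: image tag and tomcat/MMT paths are assumptions.
FROM alfresco/alfresco-content-repository-community:6.0.7-ga

# Copy a custom or community AMP (e.g. JS Console) into the image
COPY target/my-custom.amp /usr/local/tomcat/amps/

# Apply the AMP to the exploded webapp with the Module Management Tool
RUN java -jar /usr/local/tomcat/alfresco-mmt/alfresco-mmt.jar install \
      /usr/local/tomcat/amps/my-custom.amp \
      /usr/local/tomcat/webapps/alfresco -nobackup -force
```

Building this Dockerfile (`docker build -t my-acs .`) gives you a reusable image with the AMP baked in, rather than patching a running container.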

Get ACS 6 EA on your local environment and start exploring new features, functionalities, and services.

By default, Alfresco Content Services sets a search limit, based on ACL checks, of 1,000 items. In order to search for more than 1,000 items, you will need to do one of two things. However, before you make these changes, you should consider the use case behind your search requirement, as a global change will allow users to accidentally run some very long wildcard queries. If search is to
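The ACL-check limit is typically raised via `alfresco-global.properties`. The property names below are Alfresco's standard permission-check settings; the values shown are illustrative, not recommendations from the post:

```properties
# alfresco-global.properties
# Raise the number of permission (ACL) checks per query above the
# default of 1000 (value shown is an example)
system.acl.maxPermissionChecks=10000

# Cap how long permission checking may run, in milliseconds
system.acl.maxPermissionCheckTimeMillis=10000
```

Both limits exist to protect the repository from runaway queries, which is exactly why a global increase deserves the caution the post describes.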

Loop through a result set and execute an action on the objects. This example uses

Audit replication from Alfresco Content Services (ACS) to Elasticsearch using Spring Boot and Apache Camel. This project uses a pull/push integration model, where the ACS audit stream is pulled from the REST API and pushed over to Elasticsearch. Once audit data is in Elasticsearch, the Kibana UI can plug in to generate dashboards and charts based on audit actions inside of Alfresco Content Services.
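The "push" half of the model can be illustrated with a small pure function that renders pulled audit entries as Elasticsearch bulk-API NDJSON. This is a sketch, not the project's actual Camel route; the entry field names and index name are assumptions:

```python
import json

def audit_entries_to_bulk(entries, index="acs-audit"):
    """Render audit entries as Elasticsearch bulk-API NDJSON (index actions).

    `entries` is a list of dicts, each with an "id" key, loosely mirroring
    the shape of ACS audit REST responses; field names are assumptions.
    """
    lines = []
    for entry in entries:
        # Action line, then source document line, per the bulk API format
        lines.append(json.dumps({"index": {"_index": index, "_id": entry["id"]}}))
        lines.append(json.dumps(entry))
    return "\n".join(lines) + "\n"
```

The resulting string can be POSTed to Elasticsearch's `_bulk` endpoint; using the audit entry id as `_id` makes re-pulling the same window idempotent.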

Apr 2018

Evasive Errors

A very evasive proxy error from Nginx and Apache when uploading files that take more than one second to upload. However, the problem was not with the proxy servers, nor with the backend application server. The problem came from the router. TL;DR: proxy errors from Nginx and Apache on large uploads; large file uploads break; large file uploads disconnect; file upload fails if the upload takes more than 1000 ms (1 second). No true fix, except