gitbucket-fess-plugin: Search All of GitBucket on Fess
Users can search for:
- Code in all public repositories
- Code in private repositories they can access
- Issues / Pull requests
- Wiki pages
This plugin uses Fess as its search server. Fess is:
- Open source software provided under the Apache license
- Easy to deploy
- A powerful search engine based on Elasticsearch
- Able to index/search documents in 30+ languages
- Able to handle many file formats (PDF, MS Office, etc.)
|Plugin version|GitBucket version|Fess version|jar File|
|---|---|---|---|
Download the gitbucket-fess-plugin jar file from the release page and put it into
~/.gitbucket/plugins (if the directory does not exist, create it by hand).
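The install step above can be sketched as shell commands; the jar file name below is a placeholder, so use the actual file from the release page:

```shell
# Create GitBucket's plugin directory if it does not exist yet.
PLUGINS_DIR="$HOME/.gitbucket/plugins"
mkdir -p "$PLUGINS_DIR"
# Then copy the downloaded plugin jar into it (placeholder file name):
# cp gitbucket-fess-plugin-x.x.x.jar "$PLUGINS_DIR"
```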
After the installation, the admin user sets up both GitBucket and Fess.
The setup flow is as follows:
- Step 1. Run GitBucket and Fess
- Step 2. Generate Token for Crawler @GitBucket
- Step 3. Set up Crawler @Fess
- Step 4. Run Crawler @Fess
- Step 5. Register Fess's URL @GitBucket
Step 1. Run GitBucket and Fess
$ java -jar gitbucket.war --port=8080 --prefix=gitbucket
$ ./bin/fess -Dfess.port=8081 -Dfess.context.path=/fess/
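Once both processes are started, you can sanity-check that they are reachable. The URLs below assume the ports and prefixes used in the commands above; adjust them if your setup differs:

```shell
# Print "up" or "DOWN" for each service without aborting the script.
check() { curl -sf -o /dev/null "$1" && echo "up: $1" || echo "DOWN: $1"; }
check "http://localhost:8080/gitbucket/"
check "http://localhost:8081/fess/"
```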
Step 2. Generate Token for Crawler @GitBucket
Access http://[GitBucket URL]/[User Name]/_application as a GitBucket admin user and generate an access token. This token will be used by Fess's crawler.
Step 3. Set up Crawler @Fess
Access http://[Fess URL]/admin/dataconfig/ as a Fess admin user and set up a data crawler.
Then, fill in each form field as follows:
- Name: any string you like
- Handler Name: GitBucketDataStore
- Parameter:
  url=http://[GitBucket URL]
  token=[GitBucket's token generated in Step 2]
You do not need to change the other values.
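As a concrete example, if GitBucket is running as shown in Step 1, the Parameter field might look like the following (the host is an assumption and the token value is a placeholder):

```
url=http://localhost:8080/gitbucket
token=[the access token generated in Step 2]
```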
After you create a configuration successfully, you can find it in
http://[Fess URL]/admin/dataconfig/. Then, click it and create a new crawling job.
You can set up a crawling schedule here, but it can also be changed later, so if you are new to Fess you do not need to change any values here. (For more details about scheduling, see a later section.)
Step 4. Run Crawler @Fess
Access http://[Fess URL]/admin/scheduler/. You will find the job created in Step 3 at the top of the list. Select it and start it.
If a crawler starts successfully, the status of the job scheduler becomes Running like the following:
The crawling process takes time, depending on the amount of content in GitBucket. After the crawling job finishes, you can search GitBucket's contents on Fess.
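To check from the command line that crawled content is searchable, you can query Fess's JSON search API. The host, context path, and `/json/?q=` endpoint below are assumptions based on a default Fess setup like the one in Step 1; adjust them to yours:

```shell
# Build a search URL for the word "readme" and query Fess with it.
FESS_URL="http://localhost:8081/fess"
QUERY="readme"
SEARCH_URL="${FESS_URL}/json/?q=${QUERY}"
echo "$SEARCH_URL"
# curl -s "$SEARCH_URL"   # returns the search results as JSON
```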
Step 5. Register Fess's URL @GitBucket
This is the final step. Access
http://[GitBucket URL]/fess/settings as an admin user and register the Fess URL.
Then, global search will be enabled!
Private Repository Search
If the admin user registers a Fess access token at
http://[GitBucket URL]/fess/settings, users can also search their private repositories.
How to generate a Fess access token
Access http://[Fess URL]/admin/accesstoken/ as an admin user and click the
Create New button.
Next, fill in the form as follows:
- Name: any string you like
- Permissions: leave empty
- Parameter Name:
Then, the access token will be created.
By setting up job schedulers on Fess, you can run data crawlers periodically to keep search results up to date.
Access http://[Fess URL]/admin/scheduler/ and choose a crawler job. Then, you can set the value of
Schedule in cron format.
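For example, standard five-field cron expressions like the following can be used (verify the exact accepted format in your Fess version's scheduler documentation):

```
0 2 * * *      run the crawler every day at 02:00
0 */6 * * *    run the crawler every six hours
0 9 * * 1      run the crawler every Monday at 09:00
```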
To build the plugin from source, run:
$ ./sbt.sh package
(Use sbt.bat instead on Windows.)
To format the source code, run:
$ ./sbt.sh scalafmt