Brief Connect v2.5.x New Dashboard Infrastructure Release Notes
Release date: 12th January 2026 (updated 23rd April)
Supported package versions: >=2.5.4 and <2.6
This release introduces a new MongoDB-powered backend infrastructure for the Dashboard and Search pages, enabling near-instant record visibility. This infrastructure upgrade stream is separate from point-release notes and can be deployed with any 2.5.x Brief Connect application package.
New features
Dashboard powered by MongoDB (Beta)
A new MongoDB-powered backend is available for the Dashboard and Search pages, providing near-instant visibility of record changes.
Key benefits:
- Near-instant updates: Changes to records typically appear within seconds, instead of waiting for SharePoint Search to recrawl
- Improved performance: Dedicated database infrastructure for search and filtering operations
Beta status:
This feature is currently in beta. While it is stable enough for daily use:
- Some features (particularly filtering and sorting) may not work exactly as they previously did
- We will be shipping improvements over the coming weeks
- We welcome all feedback from customers on their experiences
Upgrade instructions (from v2.4.x Infrastructure)
Work with the E2 Support Team
Please complete this phase in collaboration with the E2 support team. Our team will guide you through each step and ensure a successful migration.
Change Window Required
Deployment requires a change window of several hours due to infrastructure changes and data migration. The change window begins when you start the database migration and ends when the application is switched over to the new MongoDB database. Plan to perform this upgrade outside of core usage hours.
Maintenance mode must be enabled before the migration starts and remain active for the whole deployment process.
Atlas prerequisites
MongoDB Atlas is the recommended primary database configuration. Ensure your Atlas organisation and OAuth service account (client ID/secret) are ready before starting deployment.
Alternative MongoDB primary database configurations aren't supported at this point, but work is in progress to extend support for other options. Please contact the E2 support team if you need this option. Our team will guide you through alternative options, if available.
The following instructions apply to environments already running Brief Connect v2.4.x infrastructure with a v2.5.x application package.
Overview
Deployment consists of the following steps:
- Deploy MongoDB Infrastructure
- Enable Maintenance Mode and Migration Mode
- Run Data Migration
- Swap to MongoDB
- Update Power BI Models
- Verify Migration Success (admin verification while in maintenance mode)
- Disable Maintenance Mode
- Run Full Smoke Tests (includes rollback procedure if needed)
Maintenance mode must remain active from step 2 until step 7 is complete.
Step 1: Deploy Infrastructure
This step updates your existing Azure Resources release pipeline to deploy MongoDB Atlas (primary database) and Azure Document DB (MongoDB) (replica reporting database) alongside your current infrastructure.
Prerequisites
Before starting the infrastructure deployment:
- Confirm your Azure subscription has the resource provider `Microsoft.DocumentDB` registered. If it doesn't, this resource provider will need to be registered.
- Ensure you have the MongoDB Atlas prerequisites in place:
  - An Atlas organisation (created directly in Atlas, or via the Azure Marketplace offer)
  - An Atlas OAuth service account with the Organisation Owner role
  - The Atlas Organisation ID, plus the service account Client ID and Client Secret
  - When the Atlas organisation is created, toggle off the Require IP Access List for the Atlas Administration API network setting (more details: Get Started with the Atlas Administration API: Require an IP Access List)

If you are provisioning Atlas via the Azure Marketplace SaaS offer, you may also need the Azure resource provider `Microsoft.SaaS` registered in your subscription.
- Contact the E2 support team to request an updated Azure Resources repository containing:
  - Updated Bicep templates
  - Provisioning scripts (e.g. `Scripts/ProvisionInfra.ps1`)
Non-Standard Infrastructure
If your environment uses hardened infrastructure with VNets, private endpoints, or was deployed using alternative Infrastructure as Code tools (e.g., Terraform, Pulumi), please work directly with the E2 support team for tailored deployment support. The steps below apply to standard Brief Connect deployments only.
1.1 Update DevOps App Role Assignments
The deployment pipeline requires additional roles. Update the Role Based Access Control Administrator role assignment for the "[EnvPrefix] Brief Connect DevOps" Entra ID app to include any missing roles.
- Open the Azure Resource Group for the related environment in Azure Portal
- Go to Access Control (IAM) -> Role assignments
- Find the existing Role Based Access Control Administrator assignment for "[EnvPrefix] Brief Connect DevOps"
- Edit the assignment to update the conditions:
- In the Conditions tab, click on + Select roles and principals
- Click Configure in the "Constrain roles" section
- Ensure the following roles are selected:
- Storage Account Contributor
- Key Vault Secrets User
- Key Vault Secrets Officer (new)
- Save the assignment
1.2 Update Your Release Pipeline Variables
- Open Azure DevOps and click Library in the left menu
- Update the variable group with the Environment/Stage name
- Create/update required variables according to the Admin Guide -> Configuration Management -> Application Settings guide
- Add the `azureResourcesNamePrefix` variable if not present; see the guide for details
- Add MongoDB Atlas variables:
  - `AtlasOrganizationId` (Atlas organisation ID)
  - `AtlasDevOpsAppId` (Atlas OAuth service account client ID)
  - `AtlasDevOpsAppSecret` (Atlas OAuth service account client secret; mark as secret and preferably source from Azure Key Vault)
- Save
- Open Azure DevOps and navigate to Pipelines -> Releases
- Open the Deploy Brief Connect Azure Resources release pipeline
- Click Edit
- Add the following environment variable:
  - Name: `infrastructureRootDirectory` (artifact-relative root folder; usually `_AzureResources`). Scope: `Release`
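Before running the pipeline, it can help to double-check that the variable group is complete. The sketch below is illustrative only (not part of the product): it models the variable group as a dictionary, using the variable names listed above, and reports anything missing.

```python
# Illustrative check of the variable-group entries this step expects.
# Values in example_group are placeholders; AtlasDevOpsAppSecret must be
# marked secret in Azure DevOps (and preferably sourced from Key Vault).
REQUIRED_VARIABLES = [
    "azureResourcesNamePrefix",
    "AtlasOrganizationId",
    "AtlasDevOpsAppId",
    "AtlasDevOpsAppSecret",
]

def missing_variables(variable_group: dict) -> list:
    """Return the required variable names that are absent or empty."""
    return [name for name in REQUIRED_VARIABLES if not variable_group.get(name)]

example_group = {
    "azureResourcesNamePrefix": "bcdev",   # hypothetical prefix
    "AtlasOrganizationId": "<org-id>",     # placeholder
    "AtlasDevOpsAppId": "<client-id>",     # placeholder
}
# AtlasDevOpsAppSecret is not set, so it is reported as missing:
print(missing_variables(example_group))  # ['AtlasDevOpsAppSecret']
```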
1.3 Update release pipeline steps
- Open the "Deploy Brief Connect Azure Resources" release pipeline and click Edit
- In the Stages section, add a new Stage and select the "Empty job" template
- Open the stage tasks
- Delete all existing tasks from this stage
- Add a new task. Select the "PowerShell" task template. Set its properties to:
  - Display name: Bicep param File
  - Type: File Path
  - Script Path: `$(System.DefaultWorkingDirectory)/$(infrastructureRootDirectory)/Scripts/Envs/Generate[EnvName]Params.ps1`
  - Working Directory: `$(System.DefaultWorkingDirectory)/$(infrastructureRootDirectory)/`
- Add a new task. Select the "Azure CLI" task template. Set its properties to:
  - Display name: Deploy Azure Resources
  - Azure Resource Manager connection: select the previously created service connector
  - Script Type: PowerShell
  - Script Location: Script Path
  - Script Path: `$(System.DefaultWorkingDirectory)/$(infrastructureRootDirectory)/Scripts/ProvisionInfra.ps1`
  - Script Arguments: `-AtlasDevOpsAppSecret (ConvertTo-SecureString "$(AtlasDevOpsAppSecret)" -AsPlainText -Force)`
  - ErrorActionPreference: Stop
  - Working Directory: `$(System.DefaultWorkingDirectory)/$(infrastructureRootDirectory)/`
  - Access service principal details in script: checked
1.4 Update Infrastructure Parameters generation script
In the Azure Infrastructure repository, add or update the script `/Scripts/Envs/Generate[EnvName]Params.ps1` to generate environment-specific parameter files for deployment.
- Script: use the Non-prod params script or Prod params script files as a base for your environment deployment script.
- Review and update variables in the parameters file if needed (for example, SKU sizes and naming). See the details below for the parameters used.
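For orientation, a generated parameters file for a hypothetical "dev" environment might look like the abbreviated sketch below. All resource names are illustrative, and only a subset of parameters is shown; see the parameter tables that follow for the full list.

```bicep
// Hypothetical parameters-1.bicepparam sketch; every value here is
// illustrative and must be replaced with your environment's names.
using './main-1.bicep'

param region = 'australiaeast'
param userAssignedIdentityName = 'bc-dev-identity'
param keyVaultName = 'bc-dev-kv'
param mongoMainPasswordSecretName = 'MongoMainPassword'
param mongoReplicaPasswordSecretName = 'MongoReplicaPassword'
param mongoPasswordLength = 32
param mongoPasswordMinExistingLength = 16
```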
Bicep/parameters-1.bicepparam (for main-1.bicep)

| Parameter Name | Description | Type | Default Value |
|---|---|---|---|
| region | The Azure region where the resources will be deployed. | string | (none) |
| userAssignedIdentityName | The name of the user-assigned managed identity to be created or used. | string | (none) |
| keyVaultName | The name of the Azure Key Vault to be created or used. | string | (none) |
| mongoMainPasswordSecretName | Key Vault secret name for the Atlas DB admin password. | string | (none) |
| mongoReplicaPasswordSecretName | Key Vault secret name for the Azure Document DB admin password (reporting/replica). | string | (none) |
| mongoPasswordLength | Length for newly generated passwords. | int | 32 |
| mongoPasswordMinExistingLength | Minimum length required to accept an existing secret without rotation. | int | 16 |
| forceUpdateTag | Force update tag for password generation scripts. | string | utcNow() |

Bicep/parameters-2.bicepparam (for main-2.bicep)

| Parameter Name | Description | Type | Default Value |
|---|---|---|---|
| serverAppRegistrationClientId | Server App Registration application Id | string | (none) |
| clientAppRegistrationClientId | Client App Registration application Id | string | (none) |
| region | The region where the resources will be deployed. | string | (none) |
| existingApiAppServicePlanResourceId | Optional, existing API Web App Service Plan resource ID. Used for non-prod environments to share a service plan across multiple environments. Leave empty to create a new one. | string | '' |
| existingFunctionAppServicePlanResourceId | Optional, existing Function App Service Plan resource ID. Used for non-prod environments to share a service plan across multiple environments. Leave empty to create a new one. | string | '' |
| useApiAppServicePlanForFunctionApp | Set to true to reuse the API Web App Service Plan for both the API Web App and the Function App. Recommended for non-prod environments to reduce costs. | bool | false |
| apiAppServicePlan | SKU for the API app service plan (if creating new) | object | {} |
| apiServicePlanName | Name for the API app service plan (if creating new) | string | '' |
| functionAppServicePlan | SKU for the function app service plan (if creating new) | object | {} |
| functionAppServicePlanName | Name for the function app service plan (if creating new) | string | '' |
| redisCacheSku | The SKU of the Redis cache to deploy. | object | (none) |
| redisInstanceName | Name of the Redis instance. | string | (none) |
| userAssignedIdentityName | Name of the user-assigned managed identity. | string | (none) |
| apiAppName | Name of the API App. | string | (none) |
| functionAppName | Name of the Function App. | string | (none) |
| keyVaultName | Name of the Key Vault. | string | (none) |
| logAnalyticsWorkspaceName | Name of the Log Analytics Workspace. | string | (none) |
| applicationInsightsName | Name of the Application Insights resource. | string | (none) |
| storageAccountWebName | Name of the Storage Account for web/static site. | string | (none) |
| storageAccountFuncName | Name of the Storage Account for Function App. | string | (none) |
| storageAccountDataName | Name of the Storage Account for data. | string | (none) |
| dataStorageRegion | Region for the data storage account. By default the 'region' param value is used. | string | region |
| dataMongoReplicaRegion | Region for Azure Document DB (MongoDB) deployment. By default the 'region' param value is used. | string | region |
| mongoMainConnectionStringSecretName | Key Vault secret name to store the Atlas SRV connection string placeholder. | string | (none) |
| mongoMainPasswordSecretName | Key Vault secret name that stores the Atlas DB admin password. | string | (none) |
| mongoReplicaClusterName | Azure Document DB (MongoDB) cluster name (reporting/replica). | string | (none) |
| mongoReplicaAdminUsername | Azure Document DB (MongoDB) administrator username. | string | 'mongoadmin' |
| mongoReplicaPasswordSecretName | Key Vault secret name that stores the Azure Document DB (MongoDB) admin password. | string | (none) |
| mongoReplicaServerVersion | Azure Document DB (MongoDB) server version. | string | '8.0' |
| mongoReplicaComputeTier | Azure Document DB (MongoDB) compute tier. | string | 'M10' |
| mongoReplicaStorageSizeGb | Azure Document DB (MongoDB) storage size in GB. | int | 32 |
| mongoReplicaStorageType | Azure Document DB (MongoDB) storage type. | string | 'PremiumSSD' |
| mongoReplicaShardCount | Azure Document DB (MongoDB) shard count. | int | 1 |
| mongoReplicaHighAvailabilityMode | Azure Document DB (MongoDB) high availability mode. | string | 'Disabled' |
| mongoReplicaPublicNetworkAccess | Azure Document DB (MongoDB) public network access. | string | 'Enabled' |
| mongoReplicaAllowAllIPs | Enable MongoDB firewall rule to allow all IPs (for development only). | bool | false |
| mongoReplicaAllowAzureServices | Enable MongoDB firewall rule to allow Azure services. | bool | true |
| defaultLogsRetentionInDays | Default retention in days for application logs. | int | 90 |
| auditLogsInteractiveRetentionInDays | Retention in days for interactive audit logs. | int | 180 |
| auditLogsTotalRetentionInDays | Total retention in days for audit logs (includes interactive and archived). | int | 730 |

Notes:
- For `mongoPasswordLength` and `mongoPasswordMinExistingLength`, you may use the defaults unless your security policy requires otherwise.
- MongoDB passwords are generated automatically and stored in Key Vault. You may rotate the stored secrets to values that match your security policies.
- For object parameters (e.g., SKUs), see Azure documentation for required fields.
- Parameters that have the same name in both .bicepparam files must share the same values.

ProvisionMongoAtlas.parameters.json (for Atlas cluster provisioning)
This file configures the MongoDB Atlas cluster provisioned by `ProvisionMongoAtlas.ps1`. It is generated by the same PowerShell script. The `organizationId` and `clientId` fields are sourced from the pipeline variable group (`AtlasOrganizationId` and `AtlasDevOpsAppId`), and `keyVaultName`, `dbAdminPasswordSecretName`, and `dbConnectionStringSecretName` are shared with the Bicep parameters; these do not need separate configuration. The following parameters should be reviewed and adjusted for your environment:
| Parameter Name | Description |
|---|---|
| organizationId | Atlas organisation ID. Commonly sourced from pipeline variable `AtlasOrganizationId`. |
| projectName | Atlas project name. |
| clusterName | Atlas cluster name. |
| clusterType | `DEDICATED` or `FLEX`. |
| cloudProvider | Atlas cloud provider (`AZURE`, `AWS`, `GCP`). |
| regionName | Atlas region code (for example `AUSTRALIA_EAST`). |
| instanceSizeName | Atlas cluster tier (for example `M10`, `M20`). |
| electableNodeCount | Number of electable nodes for a dedicated cluster. |
| readOnlyNodeCount | Number of read-only nodes for a dedicated cluster. |
| analyticsNodeCount | Number of analytics nodes for a dedicated cluster. |
| clientId | Atlas OAuth service account client ID. Commonly sourced from `AtlasDevOpsAppId`. |
| dbAdminUsername | Atlas DB admin username. |
| keyVaultName | Key Vault name used for secrets. |
| dbAdminPasswordSecretName | Key Vault secret name containing the Atlas DB admin password. |
| dbConnectionStringSecretName | Key Vault secret name where the SRV connection string placeholder is stored. |
| waitTimeoutMinutes | Timeout (minutes) to wait for cluster readiness. |
| enableBackups | Whether backups are enabled for dedicated clusters. |
| allowCreateProjectWhenMissing | If `true`, create the Atlas project when it does not exist. |
| allowAccessFromAnywhere | If `true`, adds `0.0.0.0/0` to the Atlas IP access list (development/testing only). |
| ipAccessList | Optional array of specific CIDR/IP access entries for Atlas. |

Notes:
- Adjust `regionName` to match your Azure deployment region. Atlas uses uppercase, underscore-separated region codes (e.g. `US_EAST_2`, `UK_SOUTH`).
- Set `organizationId` and `clientId` from pipeline variables (`AtlasOrganizationId`, `AtlasDevOpsAppId`) in your parameter-generation script.
- The `projectName` and `clusterName` values are usually derived from your environment prefix (`$mainPrefix`).
- Production network access: when `allowAccessFromAnywhere` is `false` (recommended for production), the script does not add any IP access list entries. You must whitelist your app's outbound IP addresses so the Function App and API Web App can reach Atlas. Add an `ipAccessList` array to the JSON config, for example: `"ipAccessList": [{ "cidrBlock": "20.53.x.x/32", "comment": "Function App outbound IP" }]`. To find outbound IPs, go to Azure Portal → your App Service → Networking → Outbound traffic. For stronger isolation, consider Azure Private Link or VNet Integration with a NAT Gateway.
- The remaining fields in the JSON (`clusterType`, `cloudProvider`, `electableNodeCount`, etc.) are pre-configured with sensible defaults and typically do not need adjustment.
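Putting the notes above together, a completed ProvisionMongoAtlas.parameters.json for a hypothetical production-like environment could look like the sketch below. All names and IDs are placeholders (the IP address uses a documentation range), and values should follow the parameter table above.

```json
{
  "organizationId": "<AtlasOrganizationId pipeline variable>",
  "clientId": "<AtlasDevOpsAppId pipeline variable>",
  "projectName": "bc-prod",
  "clusterName": "bc-prod-cluster",
  "clusterType": "DEDICATED",
  "cloudProvider": "AZURE",
  "regionName": "AUSTRALIA_EAST",
  "instanceSizeName": "M10",
  "electableNodeCount": 3,
  "readOnlyNodeCount": 0,
  "analyticsNodeCount": 0,
  "dbAdminUsername": "mongoadmin",
  "keyVaultName": "bc-prod-kv",
  "dbAdminPasswordSecretName": "MongoMainPassword",
  "dbConnectionStringSecretName": "MongoMainConnectionString",
  "waitTimeoutMinutes": 30,
  "enableBackups": true,
  "allowCreateProjectWhenMissing": true,
  "allowAccessFromAnywhere": false,
  "ipAccessList": [
    { "cidrBlock": "203.0.113.10/32", "comment": "Function App outbound IP (placeholder)" }
  ]
}
```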
1.6 Run the Pipeline
- Save the pipeline
- Create a new release for your environment stage only
- Wait for the deployment to complete
- Verify MongoDB instances have been provisioned:
  - In the Azure Function App's environment variables, confirm the new connection string settings (e.g. `TempMongoDb*`) exist and resolve as valid Key Vault references
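As a quick sanity check outside the portal, a setting value that resolves from Key Vault follows Azure App Service's `@Microsoft.KeyVault(...)` reference syntax. The sketch below is illustrative only (the secret URI is a placeholder) and shows a minimal pattern check you could run against exported settings:

```python
import re

# Illustrative check that an exported app-setting value looks like an
# Azure App Service Key Vault reference. The URI below is a placeholder.
KEYVAULT_REF = re.compile(
    r"^@Microsoft\.KeyVault\("
    r"(SecretUri=[^)]+|VaultName=[^;)]+;SecretName=[^)]+)\)$"
)

def looks_like_keyvault_reference(value: str) -> bool:
    """True when the value uses the Key Vault reference syntax."""
    return bool(KEYVAULT_REF.match(value))

print(looks_like_keyvault_reference(
    "@Microsoft.KeyVault(SecretUri=https://example-kv.vault.azure.net/secrets/TempMongoDbPassword/)"
))  # True
```

Note this only validates the reference format; whether the reference actually resolves is shown in the portal next to each setting.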
What this deploys
The pipeline provisions MongoDB Atlas (primary database) and Azure Document DB (MongoDB) (reporting), and updates your existing Azure resources (Function App, Key Vault, Storage Accounts, etc.) in place. Existing resources will not be recreated or deleted.
Activate performance patch for Azure Document DB instance
Following close performance testing with Microsoft, we recommend enabling a Microsoft-managed configuration step for Azure Document DB as part of provisioning. This activates a performance patch that increases cluster performance and prevents cluster degradation under Power BI data load. Microsoft is currently deploying the patch globally, but it is not guaranteed to be active on a newly provisioned instance.
Secondary replica and migration performance
The migration tool also replicates data to the secondary database during migration. This slightly increases the total time it takes to migrate all records.
For large datasets, we recommend temporarily upscaling both database instances during the migration window to reduce migration time:
| Database | Recommended Migration Tier | Default Tier |
|---|---|---|
| MongoDB Atlas (primary) | M30 | M20 |
| Azure DocumentDB with MongoDB (secondary replica) | M25 | M20 |
After the migration is complete, downscale both instances back to their standard tier.
Performance patch for Azure Document DB instance
We do not recommend downscaling the Azure Document DB instance until the patch is applied, or until Microsoft confirms that the patch has been deployed globally across all Azure Document DB instances in the Azure cloud.
Step 2: Enable Maintenance Mode and Migration Mode
Before migrating to MongoDB, place your Brief Connect environment into maintenance mode to prevent data loss during the migration.
Follow the steps in Maintenance Mode to enable maintenance mode:
- Open the Function App for the environment
- Go to Configuration -> Application settings
- Add or update the setting `MAINTENANCE_MODE` to `true`
- Add `FeatureFlags__MongoDbMigrationWriteReplicationEnabled`: `true`
- Add `FeatureFlags__MongoDbSecondaryWriteReplicationEnabled`: `true`
- Add `SkipAutoApplySharePointFieldsOnSaveConfiguration`: `true`
- Rename the `TempMongoDbConnectionString` setting to `MongoDbMigrationConnectionString` without changing the value
- Rename the `TempMongoDbPassword` setting to `MongoDbPassword` without changing the value
- Rename the `TempMongoDbReplicationSecondaryMongoConnectionString` setting to `MongoDbReplicationSecondaryMongoConnectionString` without changing the value
- Rename the `TempMongoDbReplicaPassword` setting to `MongoDbReplicaPassword` without changing the value
- Save changes and allow the app to restart
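The setting changes in this step can also be expressed as data, which is handy if you want to script or review the change set. A minimal illustrative sketch (not part of the product; the setting names are exactly those listed above):

```python
# Step 2 application-setting changes expressed as data (illustrative).
SETTINGS_TO_ADD = {
    "MAINTENANCE_MODE": "true",
    "FeatureFlags__MongoDbMigrationWriteReplicationEnabled": "true",
    "FeatureFlags__MongoDbSecondaryWriteReplicationEnabled": "true",
    "SkipAutoApplySharePointFieldsOnSaveConfiguration": "true",
}
SETTINGS_TO_RENAME = {
    "TempMongoDbConnectionString": "MongoDbMigrationConnectionString",
    "TempMongoDbPassword": "MongoDbPassword",
    "TempMongoDbReplicationSecondaryMongoConnectionString":
        "MongoDbReplicationSecondaryMongoConnectionString",
    "TempMongoDbReplicaPassword": "MongoDbReplicaPassword",
}

def apply_step2(settings: dict) -> dict:
    """Apply the renames (values unchanged), then add the new flags."""
    updated = {SETTINGS_TO_RENAME.get(k, k): v for k, v in settings.items()}
    updated.update(SETTINGS_TO_ADD)
    return updated
```

Renaming keeps the value (typically a Key Vault reference) untouched; only the setting name changes.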
Verify maintenance mode is active:
- Regular users should see a maintenance message and cannot access the application
- System Administrators can still access the application with a maintenance banner visible
- System Administrators can see the "Database Migration" button in the command bar in the Admin Panel (`https://[WEB_APP_URL]/#/adminPanel`)
Do not proceed until maintenance mode is confirmed active
Proceeding without maintenance mode enabled may result in data loss during migration.
Step 3: Prepare and Run Data Migration
Brief Connect includes a built-in migration tool in the Admin Panel to migrate data from Azure Storage to MongoDB.
Pre-fill Config Replication tables with data
- Log into the Brief Connect environment as a System Administrator
- Navigate to the Admin Panel (`https://[WEB_APP_URL]/#/adminPanel`)
- In the dropdown list of configurations, find and select the Active configuration
- In the command bar, click Save and wait until the action is completed.
Open the Data Migration panel
- Log into the Brief Connect environment as a System Administrator
- Navigate to the Admin Panel (`https://[WEB_APP_URL]/#/adminPanel`)
- In the command bar, click Database Migration
Note
The "Database Migration" button is only visible when the required migration environment variables are configured.
Note
Validate that all items in the Migration checks are green and checked.
Start the migration
- In the Selection Panel, select the tables and blob containers to migrate (all tables and containers should be selected)
- Click the Start button
- Review the confirmation dialog showing the number of items to migrate
- Click Start to begin the migration
Monitor migration progress
The Data Migration panel displays real-time progress:
- Migration Summary Card: Shows the current run ID, elapsed time, and overall state
- Progress Display: Shows tables completed, in progress, and failed, as well as total rows migrated
- Diagnostics Panel: Shows connection status, performance metrics, and any warnings
The panel auto-refreshes during migration. You can also click Refresh to manually update the status.
Tip
Click Show Detailed Progress to view per-table migration metrics.
Wait for completion
The migration is complete when:
- All tables show as "Completed" in the progress display
- All blob containers show as "Completed"
- The overall migration state shows "Completed"
If any items fail, review the error details in the Diagnostics Panel before proceeding.
Step 4: Swap to MongoDB
- Open the Function App for the environment
- Stop the application
- Go to Configuration -> Application settings
- Rename the `MongoDbMigrationConnectionString` setting to `MongoDbConnectionString` without changing the value
- Confirm `MongoDbPassword` is present and still points to the same Key Vault secret
- Confirm `MongoDbReplicationSecondaryMongoConnectionString` and `MongoDbReplicaPassword` are present (renamed in Step 2)
- Delete `FeatureFlags__MongoDbMigrationWriteReplicationEnabled`
- Delete `SkipAutoApplySharePointFieldsOnSaveConfiguration`, or revert it to its pre-migration value
- Save changes
- Open the API Web App for the environment
- Stop the application
- Go to Configuration -> Application settings
- Rename the `TempMongoDbConnectionString` setting to `MongoDbConnectionString` without changing the value
- Rename the `TempMongoDbPassword` setting to `MongoDbPassword` without changing the value
- Rename the `TempMongoDbReplicationSecondaryMongoConnectionString` setting to `MongoDbReplicationSecondaryMongoConnectionString` without changing the value
- Rename the `TempMongoDbReplicaPassword` setting to `MongoDbReplicaPassword` without changing the value
- Save changes
- Go to the Redis Cache instance and flush all data in the cache
- Go back to the Function App for the environment and start it.
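Like Step 2, the swap is just renames and deletions, sketched below as data (illustrative only; setting names are those listed above). Note that values are never modified, only keys.

```python
# Step 4 swap expressed as data (illustrative sketch, not part of the
# product). The Function App and API Web App use different rename maps.
FUNCTION_APP_RENAMES = {
    "MongoDbMigrationConnectionString": "MongoDbConnectionString",
}
API_APP_RENAMES = {
    "TempMongoDbConnectionString": "MongoDbConnectionString",
    "TempMongoDbPassword": "MongoDbPassword",
    "TempMongoDbReplicationSecondaryMongoConnectionString":
        "MongoDbReplicationSecondaryMongoConnectionString",
    "TempMongoDbReplicaPassword": "MongoDbReplicaPassword",
}
SETTINGS_TO_DELETE = {
    "FeatureFlags__MongoDbMigrationWriteReplicationEnabled",
    "SkipAutoApplySharePointFieldsOnSaveConfiguration",  # or revert instead
}

def swap_settings(settings: dict, renames: dict) -> dict:
    """Rename keys per the map and drop deleted flags; values unchanged."""
    return {
        renames.get(k, k): v
        for k, v in settings.items()
        if k not in SETTINGS_TO_DELETE
    }
```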
Step 5: Update Power BI Models
After the data migration is complete and the app is swapped to MongoDB, the Power BI models need to be updated to connect to the new MongoDB data source.
E2 Support Team Responsibility
The E2 support team will prepare updated Power BI models in advance of your scheduled deployment and provide support during the upgrade to complete this step. No action is required from your team; the E2 support team will handle the Power BI model update.
Data API Preview Feature
Power BI connects to MongoDB using the Azure Cosmos DB for MongoDB vCore connector. This requires the Data API preview feature to be enabled on your Azure Cosmos DB for MongoDB vCore (reporting database) instance.
The E2 support team will enable this feature as part of the infrastructure upgrade.
Preview Feature Considerations
Both the Data API and the Power BI connector are currently in preview. While these features are fully functional and supported for Brief Connect deployments, using preview features carries some risk (albeit small) that the service or product may change or cease to operate before reaching general availability.
Step 6: Verify Migration Success
Verification is split into two phases: admin verification while maintenance mode is active, and full smoke tests after maintenance mode is disabled.
Check migration tool status
- In the Data Migration panel, confirm the migration state shows Completed
- Review the Diagnostics Panel for any errors or warnings
- Click Download Report to download a migration report for your records
- Confirm no tables or containers show a "Failed" state
If any items failed, work with the E2 support team to resolve the issues before proceeding.
Run admin verification (while in maintenance mode)
While maintenance mode is active, System Administrators can perform initial verification to confirm the migration was successful.
| Test | Expected Result |
|---|---|
| Log in as a System Administrator | Dashboard loads successfully |
| Open the Admin Panel | Admin Panel loads with all configuration options available |
| Open an existing record | Record opens and displays all metadata correctly |
| View the Documents tab on a record | All documents are listed and can be opened |
| View the Activity Log on a record | Activity history is displayed correctly |
| View the People and Roles tab | All role assignments are displayed correctly |
| Search for a record using the search box | Search returns expected results |
| Generate a PDF Pack for a record | PDF Pack generates successfully |
If any of these tests fail, use the rollback procedure below before disabling maintenance mode.
Step 7: Disable Maintenance Mode
Once admin verification is complete:
- Open the Function App for the environment
- Go to Configuration -> Application settings
- Set `MAINTENANCE_MODE` to `false` (or remove the setting entirely)
- Save changes and allow the app to restart
- Verify that users can access the application normally
- Verify that users can access the application normally
Step 8: Run Full Smoke Tests
After maintenance mode is disabled, run the full smoke tests to verify Brief Connect functions correctly for all users.
Be prepared to rollback
If critical issues are discovered during smoke tests, you may need to re-enable maintenance mode and execute the rollback procedure.
Configuration and access
| Test | Expected Result |
|---|---|
| Log in as a regular user | Dashboard loads successfully with appropriate records visible |
| As a proxy user, switch to view another user's dashboard | Proxy user can see the assigned user's records |
Permissions
| Test | Expected Result |
|---|---|
| As a regular user, attempt to open a record you have access to | Record opens with appropriate edit/view permissions |
| As a regular user, attempt to open a record you do not have access to | Access is denied or record is not visible |
Core functionality
| Test | Expected Result |
|---|---|
| Apply filters to the dashboard | Filters work correctly and display expected records |
| Change columns visible in the dashboard | Column changes are saved and displayed |
| Export records to Excel | Excel export generates successfully |
| Create or modify a record | Confirm changes appear on the Dashboard within seconds (not minutes) |
For a comprehensive list of smoke tests, see Basic testing of Brief Connect.
Run customer-specific verification tests
Prepare your verification tests in advance
We recommend preparing a list of customer-specific verification tests before the migration window. This ensures you can quickly verify critical functionality without extending the change window.
In addition to the standard smoke tests above, you should verify any functionality specific to your Brief Connect configuration:
- Custom workflows: Test that your configured workflows behave as expected
- Record types and subtypes: Verify records of each type can be opened and edited
- Custom fields and classifications: Confirm custom field values are displayed correctly
- Integrations: Test any integrations with external systems (e.g., webhooks, APIs)
- Reporting: Verify any reporting or Power BI connections function correctly
Document the results of your verification tests.
Rollback procedure
If critical issues are discovered and cannot be resolved, you can roll back to the previous configuration.
Rollback will discard migrated data
Rolling back reverts the environment to use Azure Table Storage. Any data written to MongoDB after the swap will not be available.
Swap Application to Table Storage:
- Open the API Web App for the environment
- Stop the application
- Go to Configuration -> Application settings
- Rename the `MongoDbConnectionString` setting to `TempMongoDbConnectionString` without changing the value
- Rename the `MongoDbPassword` setting to `TempMongoDbPassword` without changing the value
- Rename the `MongoDbReplicationSecondaryMongoConnectionString` setting to `TempMongoDbReplicationSecondaryMongoConnectionString` without changing the value
- Rename the `MongoDbReplicaPassword` setting to `TempMongoDbReplicaPassword` without changing the value
- Remove the `FeatureFlags__MongoDbMigrationWriteReplicationEnabled` setting
- Save changes
- Open the Function App for the environment
- Stop the Function App
- Go to Configuration -> Application settings
- Rename the `MongoDbConnectionString` setting to `TempMongoDbConnectionString` without changing the value
- Rename the `MongoDbPassword` setting to `TempMongoDbPassword` without changing the value
- Rename the `MongoDbReplicationSecondaryMongoConnectionString` setting to `TempMongoDbReplicationSecondaryMongoConnectionString` without changing the value
- Rename the `MongoDbReplicaPassword` setting to `TempMongoDbReplicaPassword` without changing the value
- Save changes
- Go to the Redis Cache instance and flush all data in the cache
- Go back to the Function App for the environment and start it.
- Go back to the API Web App for the environment and start it.
Revert Power BI models to the previous configuration:
E2 Support Team Responsibility
The E2 support team will handle reverting the Power BI models to the previous Azure Table Storage configuration as part of the rollback process.
Contact the E2 support team if you need assistance with the rollback procedure.
Provide feedback
As this feature is in beta, we welcome all feedback on your experience:
- Report any issues with filtering, sorting, or search results to Brief Connect support
- Note any performance improvements or concerns
- Share observations about how the new behaviour compares to the previous SharePoint Search-based approach