Setting Permissions on Your Playbooks
A playbook is only as reliable as the last person who edited it. If any reviewer on your team can open a playbook and change a rule, you will, within months, have a playbook nobody fully trusts. Rules will drift. Positions will be softened quietly to make a rule fire less often. Edits will be made to fix an edge case in one deal, and those edits will silently apply to every deal afterwards.
Permissions exist to stop that. The mechanics are simple. The discipline they enforce is what separates a playbook that holds up for two years from one that has to be rebuilt in six months.
This chapter covers how to configure playbook permissions in SimpleAI, why the Reviewer and Admin split matters for quality, and how to give Reviewers a proper channel for flagging issues so that locking down editing does not create a bottleneck.
The short version
Give most of your team Reviewer access. They can run playbooks, read rules, and use outputs, but they cannot change anything. Give a small, named group Admin access. They are the only people who can edit rules. Set up a clear channel for Reviewers to report issues so feedback reaches the Admins without friction. Refer to the Playbook Governance chapter for the broader ownership and review cadence.
The two roles
SimpleAI's playbook permissions come down to two roles. That is deliberate. More granular roles sound appealing but tend to produce confusion about who is responsible for what.
Reviewer. Can open any playbook they have access to, run it against a contract, read the rules, and see the outputs. Cannot edit, add, delete, or reorder rules. Cannot change rule triggers, fallback language, or risk thresholds. This is the default role for everyone on the team who uses the playbook to review contracts.
Admin. Can do everything a Reviewer can do, plus edit rules, add new rules, retire old rules, and change the playbook's structure. This role is scoped to a named group, typically the playbook owner and one or two senior lawyers who are accountable for the playbook's accuracy.
Setting this up is a single change in the admin console. Assign each user a role, save, done. The mechanics are not the hard part. Deciding who sits in each group is.
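The split between the two roles can be pictured as a simple capability check. This is a minimal illustrative sketch, not SimpleAI's actual implementation; the role and action names are assumptions drawn from the descriptions above.

```python
# Hypothetical model of the two-role split described above.
# Role and action names are illustrative, not SimpleAI's real API.
from enum import Enum


class Role(Enum):
    REVIEWER = "reviewer"
    ADMIN = "admin"


# Admin is a strict superset of Reviewer: everything a Reviewer can do,
# plus the editing actions.
CAPABILITIES = {
    Role.REVIEWER: {"run_playbook", "read_rules", "view_outputs", "report_issue"},
    Role.ADMIN: {"run_playbook", "read_rules", "view_outputs", "report_issue",
                 "edit_rule", "add_rule", "retire_rule", "reorder_rules"},
}


def can(role: Role, action: str) -> bool:
    """True if the role is allowed to perform the action."""
    return action in CAPABILITIES[role]


print(can(Role.REVIEWER, "edit_rule"))  # False: Reviewers cannot change rules
print(can(Role.ADMIN, "edit_rule"))     # True
```

The superset relationship is the point: there is no action a Reviewer can take that an Admin cannot, which keeps responsibility unambiguous.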
Why this matters: rule drift
The first reason to lock down editing is rule drift. It happens in a predictable pattern.
A reviewer runs a playbook against a contract. The playbook flags a clause that, in this particular deal, the team has already decided to accept. The reviewer opens the rule, softens the trigger, and moves on. Two weeks later, a different reviewer hits a similar situation and does the same thing. Three months later, the rule bears almost no resemblance to the team's actual position, but nobody is quite sure when it changed or why.
The individual edits are all defensible on the day they are made. The cumulative effect is that the playbook no longer reflects your team's negotiation posture. You have a codified version of the last six edge cases, not the considered position of your senior lawyers.
Locking down editing breaks this pattern. When a Reviewer hits an edge case, they cannot silently change the rule. They have to report it, and someone with authority has to decide whether the rule actually needs changing or whether this deal is the exception. That decision leaves an audit trail.
Why this matters: quality assurance
The second reason is quality assurance. Playbook rules are not free-text. They are instructions the AI will apply consistently across every contract that crosses your desk. A rule that looks reasonable in isolation can have unintended consequences when applied at volume.
A good edit to a playbook rule goes through at least three steps: draft, test against a sample of past contracts, and review by a second lawyer. A bad edit skips all three and ships the change live.
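The three-step gate can be sketched as a simple checklist an Admin works through before a change goes live. This is an illustration of the discipline, not a SimpleAI feature; the step names are assumptions.

```python
# Hypothetical change-control gate for the three steps named above:
# draft, test against a sample of past contracts, second-lawyer review.
# Step names are illustrative, not part of any real SimpleAI workflow.
REQUIRED_STEPS = ("drafted", "tested_against_sample", "second_review")


def ready_to_ship(change: dict) -> bool:
    """A rule edit ships only when every required step is recorded as done.

    Missing steps count as not done, so a half-completed record blocks
    the change rather than letting it through.
    """
    return all(change.get(step, False) for step in REQUIRED_STEPS)


print(ready_to_ship({"drafted": True, "tested_against_sample": True,
                     "second_review": True}))   # True
print(ready_to_ship({"drafted": True}))          # False: untested, unreviewed
```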
Most people editing a rule in the middle of a busy afternoon will skip the testing and the review. That is not a character flaw. It is what time pressure does. Permissions create the friction that forces the testing and review to happen. Admins who know they are the only ones who can change the rule tend to take more care. Reviewers who know they cannot change it tend to surface more issues for discussion, which is where most quality improvements come from anyway.
Restricting edit access is not a statement of distrust. It is quality infrastructure.
How to set it up
In the SimpleAI admin console, user role assignment is a per-playbook setting. You can give the same person Admin access on one playbook and Reviewer access on another. In practice, most teams adopt one of two patterns:
- Single Admin group per playbook. Each playbook has a named owner and one or two deputies who are the Admins. Everyone else is a Reviewer. Works well for teams with clear practice group ownership (Commercial, Employment, Privacy).
- Centralised Admin group across playbooks. A small legal ops or playbook governance team holds Admin rights on every playbook. Practice group leads hold Admin rights on their own playbooks. Everyone else holds Reviewer access only. Works well for larger teams where standardisation across playbooks matters.
Either pattern works. What does not work is everyone holding Admin rights by default. If that is where you are today, changing it is a 15-minute configuration exercise and the single highest-leverage governance improvement you can make.
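Because roles are scoped per playbook, the assignment is effectively a map from playbook to user to role. This sketch shows the shape of that lookup; the field names and the default are assumptions, not SimpleAI's real configuration schema.

```python
# Hypothetical per-playbook role map. Names and structure are
# illustrative only; SimpleAI's actual schema is not public.
assignments = {
    "commercial": {"alice": "admin", "bob": "admin", "carol": "reviewer"},
    "privacy":    {"carol": "admin", "alice": "reviewer"},
}


def role_for(user: str, playbook: str, default: str = "reviewer") -> str:
    """Roles are scoped per playbook: the same user can be Admin on one
    playbook and Reviewer on another. Unknown users fall back to the
    safest default, Reviewer."""
    return assignments.get(playbook, {}).get(user, default)


print(role_for("alice", "commercial"))  # admin
print(role_for("alice", "privacy"))     # reviewer
```

Note the deliberate default: anyone not explicitly named an Admin is a Reviewer, which is the inverse of the everyone-is-Admin pilot state the text warns against.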
The reporting mechanism: non-negotiable
Locking down edit access only works if Reviewers have a clear, low-friction channel to report issues. Without that channel, they will either work around the lock or stop surfacing issues at all. Both outcomes are worse than letting them edit.
A good reporting channel has three properties.
It is where the Reviewer already is. If they have to leave the tool, open a different app, and fill in a form, they will not use it. SimpleAI's "Report issue" action on any playbook output is the intended path. The report captures the contract, the rule, and the issue in one click, and routes it to the playbook Admin.
It closes the loop. The Reviewer who reported the issue hears back, even briefly, about what was decided. "Reviewed and will not change," "Reviewed and the rule has been tightened, see next release," "Reviewed and need more examples, please flag again if you see it." Silence kills reporting behaviour faster than anything else.
It is aggregated for the Admin. Admins should see reports as a list, not as a stream of notifications. Playbook governance meetings (see the next section) are where reports get triaged, not individual Slack messages at 9pm.
If SimpleAI's in-product reporting does not fit your team's workflow, a shared channel in Slack or Teams dedicated to playbook issues is the minimum acceptable alternative. Anything less, and the permissions model will produce a bottleneck rather than a governance improvement.
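The aggregation property can be made concrete with a small sketch: group reports by rule so the rules attracting the most reports surface first at the governance meeting. The report fields mirror what the text says SimpleAI's "Report issue" action captures (contract, rule, issue), but the data structure itself is an assumption.

```python
# Hypothetical triage helper: turn a stream of Reviewer reports into
# a ranked list for the playbook Admin. Field names are illustrative.
from collections import Counter
from dataclasses import dataclass


@dataclass
class IssueReport:
    contract: str  # which contract the Reviewer was working on
    rule_id: str   # which rule fired (or failed to fire)
    note: str      # the Reviewer's description of the issue


def triage_queue(reports):
    """Group reports by rule, most-reported first, so Admins review a
    list rather than a stream of individual notifications."""
    counts = Counter(r.rule_id for r in reports)
    return counts.most_common()


reports = [
    IssueReport("acme-msa", "r-12", "trigger too broad"),
    IssueReport("globex-nda", "r-12", "fired on accepted clause"),
    IssueReport("initech-dpa", "r-7", "fallback language outdated"),
]
print(triage_queue(reports))  # [('r-12', 2), ('r-7', 1)]
```

A rule with many reports is not necessarily wrong, but it is the first place to look: repeated reports are exactly the signal the chapter's final section says a triage process should act on.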
Where this fits with governance
Permissions are one of four parts of playbook governance. The other three are ownership (who is accountable for each playbook), review cadence (how often each playbook is reviewed against real deals), and change control (the process an Admin follows before an edit goes live).
This chapter covers the mechanics of access. The Playbook Governance chapter covers the rest, including the RACI for playbook ownership, quarterly review cycles, versioning, and how to handle a playbook whose accuracy has degraded over time.
Permissions without governance are just a lock. Governance without permissions is a plan nobody can enforce. You need both.
Common mistakes
Three patterns show up repeatedly on teams that are new to playbook permissions.
Giving everyone Admin to avoid blocking them. This is the default state in most pilots and it should not survive into production. If Reviewers are getting blocked, the fix is a better reporting channel, not broader edit rights.
Keeping Admin rights on retired owners. When someone leaves the team or rotates off a playbook, revoke their Admin access. Former owners editing a playbook they no longer understand is a common source of rule drift.
Treating reported issues as tickets rather than signals. Reports are not problems to be closed. They are evidence about where the playbook is wrong. A team that triages reports weekly, decides which ones reflect drift versus which ones reflect real negotiation changes, and updates the playbook accordingly, will have a playbook that gets better over time. A team that just marks reports as resolved will not.