
code refactoring + variable type fix

xcad, 6 months ago
parent commit a67cf6e497
43 changed files with 2323 additions and 1659 deletions
AGENTS.md  +0 −32
TODO.md  +3 −0
cli/core/collection.py  +587 −0
cli/core/config.py  +225 −60
cli/core/display.py  +96 −56
cli/core/exceptions.py  +133 −0
cli/core/library.py  +28 −45
cli/core/module.py  +356 −133
cli/core/prompt.py  +21 −28
cli/core/section.py  +113 −0
cli/core/template.py  +56 −88
cli/core/validators.py  +297 −0
cli/core/variable.py  +377 −0
cli/core/variables.py  +0 −1178
cli/modules/compose.py  +2 −2
library/compose/alloy/template.yaml  +1 −1
library/compose/bind9/template.yaml  +1 −1
library/compose/checkmk/template.yaml  +1 −1
library/compose/clamav/template.yaml  +1 −1
library/compose/dockge/template.yaml  +1 −1
library/compose/gitea/template.yaml  +0 −8
library/compose/gitlab-runner/template.yaml  +1 −1
library/compose/gitlab/template.yaml  +3 −3
library/compose/heimdall/template.yaml  +1 −1
library/compose/homeassistant/template.yaml  +1 −1
library/compose/homepage/template.yaml  +1 −1
library/compose/influxdb/template.yaml  +1 −1
library/compose/loki/template.yaml  +1 −1
library/compose/mariadb/template.yaml  +1 −1
library/compose/n8n/template.yaml  +1 −1
library/compose/nginxproxymanager/template.yaml  +1 −1
library/compose/nodeexporter/template.yaml  +1 −1
library/compose/openwebui/template.yaml  +1 −1
library/compose/passbolt/template.yaml  +1 −1
library/compose/pihole/template.yaml  +1 −1
library/compose/postgres/template.yaml  +1 −1
library/compose/prometheus/template.yaml  +1 −1
library/compose/promtail/template.yaml  +1 −1
library/compose/teleport/template.yaml  +1 −1
library/compose/twingate-connector/template.yaml  +1 −1
library/compose/uptimekuma/template.yaml  +1 −1
library/compose/wazuh/template.yaml  +1 −1
library/compose/whoami/template.yaml  +1 −1

+ 0 - 32
AGENTS.md

@@ -224,38 +224,6 @@ spec:
         type: "str"
 ```
 
-## Future Improvements
-
-### Managing TODOs as GitHub Issues
-
-We use a convention to manage TODO items as GitHub issues directly from the codebase. This allows us to track our work and link it back to the specific code that needs attention.
-
-The format for a TODO item is:
-
-`TODO[<issue-number>-<slug>] <description>`
-
--   `<issue-number>`: The GitHub issue number.
--   `<slug>`: A short, descriptive slug for the epic or feature.
--   `<description>`: The description of the TODO item.
-
-When you find a TODO item that has not been converted to an issue yet (i.e., it's missing the `[<issue-number>-<slug>]` part), you can create an issue for it using the `gh` CLI:
-
-```bash
-gh issue create --title "<title>" --body "<description>" --assignee "@me" --project "<project-name>" --label "<label>"
-```
-
-After creating the issue, update the TODO line in the `AGENTS.md` file with the issue number and a descriptive slug.
-
-### Work in Progress
-
-* FIXME Insufficient Error Messages for Template Loading: Error messages during template loading need improvement for better context and debugging.
-* FIXME Excessive Generic Exception Catching: Too much generic exception catching reduces debugging capability. Need to audit and make exception handlers more specific.
-* FIXME Inconsistent Logging Levels: Some important operations use `DEBUG` when they should use `INFO`, and vice versa. Need to audit all logging statements.
-* TODO Add compose deploy command to deploy a generated compose project to a local or remote docker environment
-* TODO Missing Type Hints in Some Functions: While most code has type hints, some functions are missing them, reducing IDE support and static analysis capability.
-* TODO Interactive Variable Prompt Improvements: The interactive prompt could be improved with better navigation, help text, and validation feedback.
-* TODO Better Error Recovery in Jinja2 Rendering: Improve error handling during Jinja2 template rendering with better context and suggestions.
-
 ## Best Practices for Template Development
 
 ### Template Structure

+ 3 - 0
TODO.md

@@ -0,0 +1,3 @@
+* TODO Add compose deploy command to deploy a generated compose project to a local or remote docker environment
+* TODO Interactive Variable Prompt Improvements: The interactive prompt could be improved with better navigation, help text, and validation feedback.
+* TODO Better Error Recovery in Jinja2 Rendering: Improve error handling during Jinja2 template rendering with better context and suggestions.

+ 587 - 0
cli/core/collection.py

@@ -0,0 +1,587 @@
+from __future__ import annotations
+
+from collections import defaultdict
+from typing import Any, Dict, List, Optional, Set, Union
+import logging
+
+from .variable import Variable
+from .section import VariableSection
+
+logger = logging.getLogger(__name__)
+
+
+class VariableCollection:
+  """Manages variables grouped by sections and builds Jinja context."""
+
+  def __init__(self, spec: dict[str, Any]) -> None:
+    """Initialize VariableCollection from a specification dictionary.
+    
+    Args:
+        spec: Dictionary containing the complete variable specification structure
+              Expected format (as used in compose.py):
+              {
+                "section_key": {
+                  "title": "Section Title",
+                  "prompt": "Optional prompt text",
+                  "toggle": "optional_toggle_var_name", 
+                  "description": "Optional description",
+                  "vars": {
+                    "var_name": {
+                      "description": "Variable description",
+                      "type": "str",
+                      "default": "default_value",
+                      ...
+                    }
+                  }
+                }
+              }
+    """
+    if not isinstance(spec, dict):
+      raise ValueError("Spec must be a dictionary")
+    
+    self._sections: Dict[str, VariableSection] = {}
+    # NOTE: The _variable_map provides a flat, O(1) lookup for any variable by its name,
+    # avoiding the need to iterate through sections. It stores references to the same
+    # Variable objects contained in the _set structure.
+    self._variable_map: Dict[str, Variable] = {}
+    self._initialize_sections(spec)
+    # Validate dependencies after all sections are loaded
+    self._validate_dependencies()
+
+  def _initialize_sections(self, spec: dict[str, Any]) -> None:
+    """Initialize sections from the spec."""
+    for section_key, section_data in spec.items():
+      if not isinstance(section_data, dict):
+        continue
+      
+      section = self._create_section(section_key, section_data)
+      # Guard against None from empty YAML sections (vars: with no content)
+      vars_data = section_data.get("vars") or {}
+      self._initialize_variables(section, vars_data)
+      self._sections[section_key] = section
+    
+    # Validate all variable names are unique across sections
+    self._validate_unique_variable_names()
+
+  def _create_section(self, key: str, data: dict[str, Any]) -> VariableSection:
+    """Create a VariableSection from data."""
+    section_init_data = {
+      "key": key,
+      "title": data.get("title", key.replace("_", " ").title()),
+      "description": data.get("description"),
+      "toggle": data.get("toggle"),
+      "required": data.get("required", key == "general"),
+      "needs": data.get("needs")
+    }
+    return VariableSection(section_init_data)
+
+  def _initialize_variables(self, section: VariableSection, vars_data: dict[str, Any]) -> None:
+    """Initialize variables for a section."""
+    # Guard against None from empty YAML sections
+    if vars_data is None:
+      vars_data = {}
+    
+    for var_name, var_data in vars_data.items():
+      var_init_data = {"name": var_name, **var_data}
+      variable = Variable(var_init_data)
+      section.variables[var_name] = variable
+      # NOTE: Populate the direct lookup map for efficient access.
+      self._variable_map[var_name] = variable
+    
+    # Validate toggle variable after all variables are added
+    self._validate_section_toggle(section)
+    # TODO: Add more section-level validation:
+    #   - Validate that required sections have at least one non-toggle variable
+    #   - Validate that enum variables have non-empty options lists
+    #   - Validate that variable names follow naming conventions (e.g., lowercase_with_underscores)
+    #   - Validate that default values are compatible with their type definitions
+
+  def _validate_unique_variable_names(self) -> None:
+    """Validate that all variable names are unique across all sections."""
+    var_to_sections: Dict[str, List[str]] = defaultdict(list)
+    
+    # Build mapping of variable names to sections
+    for section_key, section in self._sections.items():
+      for var_name in section.variables:
+        var_to_sections[var_name].append(section_key)
+    
+    # Find duplicates and format error
+    duplicates = {var: sections for var, sections in var_to_sections.items() if len(sections) > 1}
+    
+    if duplicates:
+      errors = ["Variable names must be unique across all sections, but found duplicates:"]
+      errors.extend(f"  - '{var}' appears in sections: {', '.join(secs)}" for var, secs in sorted(duplicates.items()))
+      errors.append("\nPlease rename variables to be unique or consolidate them into a single section.")
+      error_msg = "\n".join(errors)
+      logger.error(error_msg)
+      raise ValueError(error_msg)
+  
+  def _validate_section_toggle(self, section: VariableSection) -> None:
+    """Validate that toggle variable is of type bool if it exists.
+    
+    If the toggle variable doesn't exist (e.g., filtered out), removes the toggle.
+    
+    Args:
+        section: The section to validate
+        
+    Raises:
+        ValueError: If toggle variable exists but is not boolean type
+    """
+    if not section.toggle:
+      return
+    
+    toggle_var = section.variables.get(section.toggle)
+    if not toggle_var:
+      # Toggle variable doesn't exist (e.g., was filtered out) - remove toggle metadata
+      section.toggle = None
+      return
+    
+    if toggle_var.type != "bool":
+      raise ValueError(
+        f"Section '{section.key}' toggle variable '{section.toggle}' must be type 'bool', "
+        f"but is type '{toggle_var.type}'"
+      )
+  
+  def _validate_dependencies(self) -> None:
+    """Validate section dependencies for cycles and missing references.
+    
+    Raises:
+        ValueError: If circular dependencies or missing section references are found
+    """
+    # Check for missing dependencies
+    for section_key, section in self._sections.items():
+      for dep in section.needs:
+        if dep not in self._sections:
+          raise ValueError(
+            f"Section '{section_key}' depends on '{dep}', but '{dep}' does not exist"
+          )
+    
+    # Check for circular dependencies using depth-first search
+    visited = set()
+    rec_stack = set()
+    
+    def has_cycle(section_key: str) -> bool:
+      visited.add(section_key)
+      rec_stack.add(section_key)
+      
+      section = self._sections[section_key]
+      for dep in section.needs:
+        if dep not in visited:
+          if has_cycle(dep):
+            return True
+        elif dep in rec_stack:
+          raise ValueError(
+            f"Circular dependency detected: '{section_key}' depends on '{dep}', "
+            f"which creates a cycle"
+          )
+      
+      rec_stack.remove(section_key)
+      return False
+    
+    for section_key in self._sections:
+      if section_key not in visited:
+        has_cycle(section_key)
+  
+  def is_section_satisfied(self, section_key: str) -> bool:
+    """Check if all dependencies for a section are satisfied.
+    
+    A dependency is satisfied if:
+    1. The dependency section exists
+    2. The dependency section is enabled (if it has a toggle)
+    
+    Args:
+        section_key: The key of the section to check
+        
+    Returns:
+        True if all dependencies are satisfied, False otherwise
+    """
+    section = self._sections.get(section_key)
+    if not section:
+      return False
+    
+    # No dependencies = always satisfied
+    if not section.needs:
+      return True
+    
+    # Check each dependency
+    for dep_key in section.needs:
+      dep_section = self._sections.get(dep_key)
+      if not dep_section:
+        logger.warning(f"Section '{section_key}' depends on missing section '{dep_key}'")
+        return False
+      
+      # Check if dependency is enabled
+      if not dep_section.is_enabled():
+        logger.debug(f"Section '{section_key}' dependency '{dep_key}' is disabled")
+        return False
+    
+    return True
+
+  def sort_sections(self) -> None:
+    """Sort sections with the following priority:
+    
+    1. Dependencies come before dependents (topological sort)
+    2. Required sections first (in their original order)
+    3. Enabled sections with satisfied dependencies next (in their original order)
+    4. Disabled sections or sections with unsatisfied dependencies last (in their original order)
+    
+    This maintains the original ordering within each group while organizing
+    sections logically for display and user interaction, and ensures that
+    sections are prompted in the correct dependency order.
+    """
+    # First, perform topological sort to respect dependencies
+    sorted_keys = self._topological_sort()
+    
+    # Then apply priority sorting within dependency groups
+    section_items = [(key, self._sections[key]) for key in sorted_keys]
+    
+    # Define sort key: (priority, original_index)
+    # Priority: 0 = required, 1 = enabled with satisfied dependencies, 2 = disabled or unsatisfied dependencies
+    def get_sort_key(item_with_index):
+      index, (key, section) = item_with_index
+      if section.required:
+        priority = 0
+      elif section.is_enabled() and self.is_section_satisfied(key):
+        priority = 1
+      else:
+        priority = 2
+      return (priority, index)
+    
+    # Sort with original index to maintain order within each priority group
+    # Note: This preserves the topological order from earlier
+    sorted_items = sorted(
+      enumerate(section_items),
+      key=get_sort_key
+    )
+    
+    # Rebuild _sections dict in new order
+    self._sections = {key: section for _, (key, section) in sorted_items}
+  
+  def _topological_sort(self) -> List[str]:
+    """Perform topological sort on sections based on dependencies using Kahn's algorithm."""
+    in_degree = {key: len(section.needs) for key, section in self._sections.items()}
+    queue = [key for key, degree in in_degree.items() if degree == 0]
+    queue.sort(key=lambda k: list(self._sections.keys()).index(k))  # Preserve original order
+    result = []
+    
+    while queue:
+      current = queue.pop(0)
+      result.append(current)
+      
+      # Update in-degree for dependent sections
+      for key, section in self._sections.items():
+        if current in section.needs:
+          in_degree[key] -= 1
+          if in_degree[key] == 0:
+            queue.append(key)
+    
+    # Fallback to original order if cycle detected
+    if len(result) != len(self._sections):
+      logger.warning("Topological sort incomplete - using original order")
+      return list(self._sections.keys())
+    
+    return result
+
+  def get_sections(self) -> Dict[str, VariableSection]:
+    """Get all sections in the collection."""
+    return self._sections.copy()
+  
+  def get_section(self, key: str) -> Optional[VariableSection]:
+    """Get a specific section by its key."""
+    return self._sections.get(key)
+  
+  def has_sections(self) -> bool:
+    """Check if the collection has any sections."""
+    return bool(self._sections)
+
+  def get_all_values(self) -> dict[str, Any]:
+    """Get all variable values as a dictionary."""
+    # NOTE: Uses _variable_map for O(1) access
+    return {name: var.convert(var.value) for name, var in self._variable_map.items()}
+  
+  def get_satisfied_values(self) -> dict[str, Any]:
+    """Get variable values only from sections with satisfied dependencies.
+    
+    This respects both toggle states and section dependencies, ensuring that:
+    - Variables from disabled sections (toggle=false) are excluded
+    - Variables from sections with unsatisfied dependencies are excluded
+    
+    Returns:
+        Dictionary of variable names to values for satisfied sections only
+    """
+    satisfied_values = {}
+    
+    for section_key, section in self._sections.items():
+      # Skip sections with unsatisfied dependencies
+      if not self.is_section_satisfied(section_key):
+        logger.debug(f"Excluding variables from section '{section_key}' - dependencies not satisfied")
+        continue
+      
+      # Skip disabled sections (toggle check)
+      if not section.is_enabled():
+        logger.debug(f"Excluding variables from section '{section_key}' - section is disabled")
+        continue
+      
+      # Include all variables from this satisfied section
+      for var_name, variable in section.variables.items():
+        satisfied_values[var_name] = variable.convert(variable.value)
+    
+    return satisfied_values
+
+  def get_sensitive_variables(self) -> Dict[str, Any]:
+    """Get only the sensitive variables with their values."""
+    return {name: var.value for name, var in self._variable_map.items() if var.sensitive and var.value}
+
+  def apply_defaults(self, defaults: dict[str, Any], origin: str = "cli") -> list[str]:
+    """Apply default values to variables, updating their origin.
+    
+    Args:
+        defaults: Dictionary mapping variable names to their default values
+        origin: Source of these defaults (e.g., 'config', 'cli')
+        
+    Returns:
+        List of variable names that were successfully updated
+    """
+    # NOTE: This method uses the _variable_map for a significant performance gain,
+    # as it allows direct O(1) lookup of variables instead of iterating
+    # through all sections to find a match.
+    successful = []
+    errors = []
+    
+    for var_name, value in defaults.items():
+      try:
+        variable = self._variable_map.get(var_name)
+        if not variable:
+          logger.warning(f"Variable '{var_name}' not found in template")
+          continue
+        
+        # Store original value before overriding (for display purposes)
+        # Only store if this is the first time config is being applied
+        if origin == "config" and not hasattr(variable, '_original_stored'):
+          variable.original_value = variable.value
+          variable._original_stored = True
+        
+        # Convert and set the new value
+        converted_value = variable.convert(value)
+        variable.value = converted_value
+        
+        # Set origin to the current source (not a chain)
+        variable.origin = origin
+        
+        successful.append(var_name)
+          
+      except ValueError as e:
+        error_msg = f"Invalid value for '{var_name}': {value} - {e}"
+        errors.append(error_msg)
+        logger.error(error_msg)
+    
+    if errors:
+      logger.warning(f"Some defaults failed to apply: {'; '.join(errors)}")
+    
+    return successful
+  
+  def validate_all(self) -> None:
+    """Validate all variables in the collection, skipping disabled and unsatisfied sections."""
+    errors: list[str] = []
+
+    for section_key, section in self._sections.items():
+      # Skip sections with unsatisfied dependencies or disabled via toggle
+      if not self.is_section_satisfied(section_key) or not section.is_enabled():
+        logger.debug(f"Skipping validation for section '{section_key}'")
+        continue
+
+      # Validate each variable in the section
+      for var_name, variable in section.variables.items():
+        try:
+          # Skip autogenerated variables when empty
+          if variable.autogenerated and not variable.value:
+            continue
+          
+          # Check required fields
+          if variable.value is None:
+            if variable.is_required():
+              errors.append(f"{section.key}.{var_name} (required - no default provided)")
+            continue
+
+          # Validate typed value
+          typed = variable.convert(variable.value)
+          if variable.type not in ("bool",) and not typed:
+            msg = f"{section.key}.{var_name}"
+            errors.append(f"{msg} (required - cannot be empty)" if variable.is_required() else f"{msg} (empty)")
+
+        except ValueError as e:
+          errors.append(f"{section.key}.{var_name} (invalid format: {e})")
+
+    if errors:
+      error_msg = "Variable validation failed: " + ", ".join(errors)
+      logger.error(error_msg)
+      raise ValueError(error_msg)
+
+  def merge(self, other_spec: Union[Dict[str, Any], 'VariableCollection'], origin: str = "override") -> 'VariableCollection':
+    """Merge another spec or VariableCollection into this one with precedence tracking.
+    
+    OPTIMIZED: Works directly on objects without dict conversions for better performance.
+    
+    The other spec/collection has higher precedence and will override values in self.
+    Creates a new VariableCollection with merged data.
+    
+    Args:
+        other_spec: Either a spec dictionary or another VariableCollection to merge
+        origin: Origin label for variables from other_spec (e.g., 'template', 'config')
+        
+    Returns:
+        New VariableCollection with merged data
+        
+    Example:
+        module_vars = VariableCollection(module_spec)
+        template_vars = module_vars.merge(template_spec, origin='template')
+        # Variables from template_spec override module_spec
+        # Origins tracked: 'module' or 'module -> template'
+    """
+    # Convert dict to VariableCollection if needed (only once)
+    if isinstance(other_spec, dict):
+      other = VariableCollection(other_spec)
+    else:
+      other = other_spec
+    
+    # Create new collection without calling __init__ (optimization)
+    merged = VariableCollection.__new__(VariableCollection)
+    merged._sections = {}
+    merged._variable_map = {}
+    
+    # First pass: clone sections from self
+    for section_key, self_section in self._sections.items():
+      if section_key in other._sections:
+        # Section exists in both - will merge
+        merged._sections[section_key] = self._merge_sections(
+          self_section, 
+          other._sections[section_key], 
+          origin
+        )
+      else:
+        # Section only in self - clone it
+        merged._sections[section_key] = self_section.clone()
+    
+    # Second pass: add sections that only exist in other
+    for section_key, other_section in other._sections.items():
+      if section_key not in merged._sections:
+        # New section from other - clone with origin update
+        merged._sections[section_key] = other_section.clone(origin_update=origin)
+    
+    # Rebuild variable map for O(1) lookups
+    for section in merged._sections.values():
+      for var_name, variable in section.variables.items():
+        merged._variable_map[var_name] = variable
+    
+    return merged
+  
+  def _merge_sections(self, self_section: VariableSection, other_section: VariableSection, origin: str) -> VariableSection:
+    """Merge two sections, with other_section taking precedence."""
+    merged_section = self_section.clone()
+    
+    # Update section metadata from other (other takes precedence)
+    for attr in ('title', 'description', 'toggle'):
+      if getattr(other_section, attr):
+        setattr(merged_section, attr, getattr(other_section, attr))
+    
+    merged_section.required = other_section.required
+    if other_section.needs:
+      merged_section.needs = other_section.needs.copy()
+    
+    # Merge variables
+    for var_name, other_var in other_section.variables.items():
+      if var_name in merged_section.variables:
+        # Variable exists in both - merge with other taking precedence
+        self_var = merged_section.variables[var_name]
+        
+        # Build update dict with ONLY explicitly provided fields from other
+        update = {'origin': origin}
+        field_map = {
+          'type': other_var.type,
+          'description': other_var.description,
+          'prompt': other_var.prompt,
+          'options': other_var.options,
+          'sensitive': other_var.sensitive,
+          'extra': other_var.extra
+        }
+        
+        # Add fields that were explicitly provided and have values
+        for field, value in field_map.items():
+          if field in other_var._explicit_fields and value:
+            update[field] = value
+        
+        # Special handling for value/default
+        if ('value' in other_var._explicit_fields or 'default' in other_var._explicit_fields) and other_var.value is not None:
+          update['value'] = other_var.value
+        
+        merged_section.variables[var_name] = self_var.clone(update=update)
+      else:
+        # New variable from other - clone with origin
+        merged_section.variables[var_name] = other_var.clone(update={'origin': origin})
+    
+    return merged_section
+  
+  def filter_to_used(self, used_variables: Set[str], keep_sensitive: bool = True) -> 'VariableCollection':
+    """Filter collection to only variables that are used (or sensitive).
+    
+    OPTIMIZED: Works directly on objects without dict conversions for better performance.
+    
+    Creates a new VariableCollection containing only the variables in used_variables.
+    Sections with no remaining variables are removed.
+    
+    Args:
+        used_variables: Set of variable names that are actually used
+        keep_sensitive: If True, also keep sensitive variables even if not in used set
+        
+    Returns:
+        New VariableCollection with filtered variables
+        
+    Example:
+        all_vars = VariableCollection(spec)
+        used_vars = all_vars.filter_to_used({'var1', 'var2', 'var3'})
+        # Only var1, var2, var3 (and any sensitive vars) remain
+    """
+    # Create new collection without calling __init__ (optimization)
+    filtered = VariableCollection.__new__(VariableCollection)
+    filtered._sections = {}
+    filtered._variable_map = {}
+    
+    # Filter each section
+    for section_key, section in self._sections.items():
+      # Create a new section with same metadata
+      filtered_section = VariableSection({
+        'key': section.key,
+        'title': section.title,
+        'description': section.description,
+        'toggle': section.toggle,
+        'required': section.required,
+        'needs': section.needs.copy() if section.needs else None,
+      })
+      
+      # Clone only the variables that should be included
+      for var_name, variable in section.variables.items():
+        # Include if used OR if sensitive (and keep_sensitive is True)
+        should_include = (
+          var_name in used_variables or 
+          (keep_sensitive and variable.sensitive)
+        )
+        
+        if should_include:
+          filtered_section.variables[var_name] = variable.clone()
+      
+      # Only add section if it has variables
+      if filtered_section.variables:
+        filtered._sections[section_key] = filtered_section
+        # Add variables to map
+        for var_name, variable in filtered_section.variables.items():
+          filtered._variable_map[var_name] = variable
+    
+    return filtered
+  
+  def get_all_variable_names(self) -> Set[str]:
+    """Get set of all variable names across all sections.
+    
+    Returns:
+        Set of all variable names
+    """
+    return set(self._variable_map.keys())
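
The `_topological_sort` method added above uses Kahn's algorithm, falling back to the declared order if a cycle survives validation. A minimal, self-contained sketch of the same idea (the plain `needs` mapping here is illustrative, not the actual `VariableCollection` API):

```python
from collections import deque

def topological_sort(needs: dict[str, list[str]]) -> list[str]:
    """Order keys so that every dependency comes before its dependents.

    `needs` maps each key to the keys it depends on, mirroring the
    section-level 'needs' field used in the diff above.
    """
    in_degree = {key: len(deps) for key, deps in needs.items()}
    queue = deque(k for k, d in in_degree.items() if d == 0)
    result = []
    while queue:
        current = queue.popleft()
        result.append(current)
        # A key's in-degree drops once one of its dependencies is emitted
        for key, deps in needs.items():
            if current in deps:
                in_degree[key] -= 1
                if in_degree[key] == 0:
                    queue.append(key)
    # An incomplete result means a cycle; fall back to the original order
    if len(result) != len(needs):
        return list(needs.keys())
    return result

order = topological_sort({"app": ["database"], "database": [], "proxy": ["app"]})
# order == ["database", "app", "proxy"]
```

As in the diff, a cycle does not raise here: the incomplete result simply triggers the original-order fallback, keeping prompting usable even with a bad spec.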

+ 225 - 60
cli/core/config.py

@@ -11,7 +11,10 @@ from typing import Any, Dict, Optional, Union
 import yaml
 from rich.console import Console
 
-from .variables import Variable, VariableSection, VariableCollection
+from .variable import Variable
+from .section import VariableSection
+from .collection import VariableCollection
+from .exceptions import ConfigError, ConfigValidationError, YAMLParseError
 
 
 logger = logging.getLogger(__name__)
 console = Console()
@@ -19,6 +22,14 @@ console = Console()
 # Valid Python identifier pattern for variable names
 VALID_IDENTIFIER_PATTERN = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_]*$')
 
+# Valid path pattern - prevents path traversal attempts
+VALID_PATH_PATTERN = re.compile(r'^[^\x00-\x1f<>:"|?*]+$')
+
+# Maximum allowed string lengths to prevent DOS attacks
+MAX_STRING_LENGTH = 1000
+MAX_PATH_LENGTH = 4096
+MAX_LIST_LENGTH = 100
+
 class ConfigManager:
     """Manages configuration for the CLI application."""
     
@@ -53,6 +64,68 @@ class ConfigManager:
         self._write_config(default_config)
         logger.info(f"Created default configuration at {self.config_path}")
     
+    @staticmethod
+    def _validate_string_length(value: str, field_name: str, max_length: int = MAX_STRING_LENGTH) -> None:
+        """Validate string length to prevent DOS attacks.
+        
+        Args:
+            value: String value to validate
+            field_name: Name of the field for error messages
+            max_length: Maximum allowed length
+            
+        Raises:
+            ConfigValidationError: If string exceeds maximum length
+        """
+        if len(value) > max_length:
+            raise ConfigValidationError(
+                f"{field_name} exceeds maximum length of {max_length} characters "
+                f"(got {len(value)} characters)"
+            )
+    
+    @staticmethod
+    def _validate_path_string(path: str, field_name: str) -> None:
+        """Validate path string for security concerns.
+        
+        Args:
+            path: Path string to validate
+            field_name: Name of the field for error messages
+            
+        Raises:
+            ConfigValidationError: If path contains invalid characters or patterns
+        """
+        # Check length
+        if len(path) > MAX_PATH_LENGTH:
+            raise ConfigValidationError(
+                f"{field_name} exceeds maximum path length of {MAX_PATH_LENGTH} characters"
+            )
+        
+        # Check for null bytes and control characters
+        if '\x00' in path or any(ord(c) < 32 for c in path if c not in '\t\n\r'):
+            raise ConfigValidationError(
+                f"{field_name} contains invalid control characters"
+            )
+        
+        # Check for path traversal attempts
+        if '..' in path.split('/'):
+            logger.warning(f"Path '{path}' contains '..' - potential path traversal attempt")
+    
+    @staticmethod
+    def _validate_list_length(lst: list, field_name: str, max_length: int = MAX_LIST_LENGTH) -> None:
+        """Validate list length to prevent DOS attacks.
+        
+        Args:
+            lst: List to validate
+            field_name: Name of the field for error messages
+            max_length: Maximum allowed length
+            
+        Raises:
+            ConfigValidationError: If list exceeds maximum length
+        """
+        if len(lst) > max_length:
+            raise ConfigValidationError(
+                f"{field_name} exceeds maximum length of {max_length} items (got {len(lst)} items)"
+            )
+    
    def _read_config(self) -> Dict[str, Any]:
        """Read configuration from file.
        
@@ -60,8 +133,9 @@ class ConfigManager:
            Dictionary containing the configuration.
            
        Raises:
-            yaml.YAMLError: If YAML parsing fails.
-            ValueError: If configuration structure is invalid.
+            YAMLParseError: If YAML parsing fails.
+            ConfigValidationError: If configuration structure is invalid.
+            ConfigError: If reading fails for other reasons.
        """
        try:
            with open(self.config_path, 'r') as f:
@@ -73,13 +147,13 @@ class ConfigManager:
            return config
        except yaml.YAMLError as e:
            logger.error(f"Failed to parse YAML configuration: {e}")
+            raise YAMLParseError(str(self.config_path), e)
+        except ConfigValidationError:
+            # Re-raise validation errors as-is
            raise
-        except ValueError as e:
-            logger.error(f"Invalid configuration structure: {e}")
-            raise
-        except Exception as e:
+        except (IOError, OSError) as e:
            logger.error(f"Failed to read configuration file: {e}")
-            raise
+            raise ConfigError(f"Failed to read configuration file '{self.config_path}': {e}")
    
    def _write_config(self, config: Dict[str, Any]) -> None:
        """Write configuration to file atomically using temp file + rename pattern.
@@ -90,8 +164,10 @@ class ConfigManager:
            config: Dictionary containing the configuration to write.
            
        Raises:
-            ValueError: If configuration structure is invalid.
+            ConfigValidationError: If configuration structure is invalid.
+            ConfigError: If writing fails for any reason.
        """
+        tmp_path = None
        try:
            # Validate config structure before writing
            self._validate_config_structure(config)
@@ -114,78 +190,117 @@ class ConfigManager:
            shutil.move(tmp_path, self.config_path)
            logger.debug(f"Configuration written atomically to {self.config_path}")
            
-        except ValueError as e:
-            logger.error(f"Invalid configuration structure: {e}")
+        except ConfigValidationError:
+            # Re-raise validation errors as-is
+            if tmp_path:
+                Path(tmp_path).unlink(missing_ok=True)
            raise
-        except Exception as e:
+        except (IOError, OSError, yaml.YAMLError) as e:
            # Clean up temp file if it exists
-            if 'tmp_path' in locals():
+            if tmp_path:
                try:
                    Path(tmp_path).unlink(missing_ok=True)
-                except Exception:
-                    pass
+                except (IOError, OSError):
+                    logger.warning(f"Failed to clean up temporary file: {tmp_path}")
            logger.error(f"Failed to write configuration file: {e}")
-            raise
+            raise ConfigError(f"Failed to write configuration to '{self.config_path}': {e}")
    
    def _validate_config_structure(self, config: Dict[str, Any]) -> None:
-        """Validate the configuration structure.
+        """Validate the configuration structure with comprehensive checks.
        
        Args:
            config: Configuration dictionary to validate.
            
        Raises:
-            ValueError: If configuration structure is invalid.
+            ConfigValidationError: If configuration structure is invalid.
        """
        if not isinstance(config, dict):
-            raise ValueError("Configuration must be a dictionary")
+            raise ConfigValidationError("Configuration must be a dictionary")
        
        # Check top-level structure
        if "defaults" in config and not isinstance(config["defaults"], dict):
-            raise ValueError("'defaults' must be a dictionary")
+            raise ConfigValidationError("'defaults' must be a dictionary")
        
        if "preferences" in config and not isinstance(config["preferences"], dict):
-            raise ValueError("'preferences' must be a dictionary")
+            raise ConfigValidationError("'preferences' must be a dictionary")
        
        # Validate defaults structure
        if "defaults" in config:
            for module_name, module_defaults in config["defaults"].items():
                if not isinstance(module_name, str):
-                    raise ValueError(f"Module name must be a string, got {type(module_name).__name__}")
+                    raise ConfigValidationError(f"Module name must be a string, got {type(module_name).__name__}")
+                
+                # Validate module name length
+                self._validate_string_length(module_name, "Module name", max_length=100)
                
                if not isinstance(module_defaults, dict):
-                    raise ValueError(f"Defaults for module '{module_name}' must be a dictionary")
+                    raise ConfigValidationError(f"Defaults for module '{module_name}' must be a dictionary")
+                
+                # Validate number of defaults per module
+                self._validate_list_length(
+                    list(module_defaults.keys()), 
+                    f"Defaults for module '{module_name}'"
+                )
                
                # Validate variable names are valid Python identifiers
-                for var_name in module_defaults.keys():
+                for var_name, var_value in module_defaults.items():
                    if not isinstance(var_name, str):
-                        raise ValueError(f"Variable name must be a string, got {type(var_name).__name__}")
+                        raise ConfigValidationError(f"Variable name must be a string, got {type(var_name).__name__}")
+                    
+                    # Validate variable name length
+                    self._validate_string_length(var_name, "Variable name", max_length=100)
                    
                    if not VALID_IDENTIFIER_PATTERN.match(var_name):
-                        raise ValueError(
+                        raise ConfigValidationError(
                            f"Invalid variable name '{var_name}' in module '{module_name}'. "
                            f"Variable names must be valid Python identifiers (letters, numbers, underscores, "
                            f"cannot start with a number)"
                        )
+                    
+                    # Validate variable value types and lengths
+                    if isinstance(var_value, str):
+                        self._validate_string_length(
+                            var_value, 
+                            f"Value for '{module_name}.{var_name}'"
+                        )
+                    elif isinstance(var_value, list):
+                        self._validate_list_length(
+                            var_value, 
+                            f"Value for '{module_name}.{var_name}'"
+                        )
+                    elif var_value is not None and not isinstance(var_value, (bool, int, float)):
+                        raise ConfigValidationError(
+                            f"Invalid value type for '{module_name}.{var_name}': "
+                            f"must be string, number, boolean, list, or null (got {type(var_value).__name__})"
+                        )
        
        # Validate preferences structure and types
        if "preferences" in config:
            preferences = config["preferences"]
            
            # Validate known preference types
-            if "editor" in preferences and not isinstance(preferences["editor"], str):
-                raise ValueError("Preference 'editor' must be a string")
+            if "editor" in preferences:
+                if not isinstance(preferences["editor"], str):
+                    raise ConfigValidationError("Preference 'editor' must be a string")
+                self._validate_string_length(preferences["editor"], "Preference 'editor'", max_length=100)
            
            if "output_dir" in preferences:
-                if preferences["output_dir"] is not None and not isinstance(preferences["output_dir"], str):
-                    raise ValueError("Preference 'output_dir' must be a string or null")
+                output_dir = preferences["output_dir"]
+                if output_dir is not None:
+                    if not isinstance(output_dir, str):
+                        raise ConfigValidationError("Preference 'output_dir' must be a string or null")
+                    self._validate_path_string(output_dir, "Preference 'output_dir'")
            
            if "library_paths" in preferences:
                if not isinstance(preferences["library_paths"], list):
-                    raise ValueError("Preference 'library_paths' must be a list")
+                    raise ConfigValidationError("Preference 'library_paths' must be a list")
+                
+                self._validate_list_length(preferences["library_paths"], "Preference 'library_paths'")
                
-                for path in preferences["library_paths"]:
+                for i, path in enumerate(preferences["library_paths"]):
                    if not isinstance(path, str):
-                        raise ValueError(f"Library path must be a string, got {type(path).__name__}")
+                        raise ConfigValidationError(f"Library path must be a string, got {type(path).__name__}")
+                    self._validate_path_string(path, f"Library path at index {i}")
    
    def get_config_path(self) -> Path:
        """Get the path to the configuration file.
@@ -215,7 +330,7 @@ class ConfigManager:
        return defaults.get(module_name, {})
    
    def set_defaults(self, module_name: str, defaults: Dict[str, Any]) -> None:
-        """Set default variable values for a module.
+        """Set default variable values for a module with comprehensive validation.
        
        Args:
            module_name: Name of the module
@@ -223,26 +338,44 @@ class ConfigManager:
                      {"var_name": "value", "var2_name": "value2"}
                      
        Raises:
-            ValueError: If module name or variable names are invalid.
+            ConfigValidationError: If module name or variable names are invalid.
        """
        # Validate module name
        if not isinstance(module_name, str) or not module_name:
-            raise ValueError("Module name must be a non-empty string")
+            raise ConfigValidationError("Module name must be a non-empty string")
+        
+        self._validate_string_length(module_name, "Module name", max_length=100)
        
        # Validate defaults dictionary
        if not isinstance(defaults, dict):
-            raise ValueError("Defaults must be a dictionary")
+            raise ConfigValidationError("Defaults must be a dictionary")
+        
+        # Validate number of defaults
+        self._validate_list_length(list(defaults.keys()), "Defaults dictionary")
        
-        # Validate variable names
-        for var_name in defaults.keys():
+        # Validate variable names and values
+        for var_name, var_value in defaults.items():
            if not isinstance(var_name, str):
-                raise ValueError(f"Variable name must be a string, got {type(var_name).__name__}")
+                raise ConfigValidationError(f"Variable name must be a string, got {type(var_name).__name__}")
            
+            self._validate_string_length(var_name, "Variable name", max_length=100)
            
            if not VALID_IDENTIFIER_PATTERN.match(var_name):
-                raise ValueError(
+                raise ConfigValidationError(
                    f"Invalid variable name '{var_name}'. Variable names must be valid Python identifiers "
                    f"(letters, numbers, underscores, cannot start with a number)"
                )
+            
+            # Validate value types and lengths
+            if isinstance(var_value, str):
+                self._validate_string_length(var_value, f"Value for '{var_name}'")
+            elif isinstance(var_value, list):
+                self._validate_list_length(var_value, f"Value for '{var_name}'")
+            elif var_value is not None and not isinstance(var_value, (bool, int, float)):
+                raise ConfigValidationError(
+                    f"Invalid value type for '{var_name}': "
+                    f"must be string, number, boolean, list, or null (got {type(var_value).__name__})"
+                )
        
        config = self._read_config()
        
@@ -254,7 +387,7 @@ class ConfigManager:
        logger.info(f"Updated defaults for module '{module_name}'")
    
    def set_default_value(self, module_name: str, var_name: str, value: Any) -> None:
-        """Set a single default variable value.
+        """Set a single default variable value with comprehensive validation.
        
        Args:
            module_name: Name of the module
@@ -262,21 +395,36 @@ class ConfigManager:
            value: Default value to set
            
        Raises:
-            ValueError: If module name or variable name is invalid.
+            ConfigValidationError: If module name or variable name is invalid.
        """
        # Validate inputs
        if not isinstance(module_name, str) or not module_name:
-            raise ValueError("Module name must be a non-empty string")
+            raise ConfigValidationError("Module name must be a non-empty string")
+        
+        self._validate_string_length(module_name, "Module name", max_length=100)
        
        if not isinstance(var_name, str):
-            raise ValueError(f"Variable name must be a string, got {type(var_name).__name__}")
+            raise ConfigValidationError(f"Variable name must be a string, got {type(var_name).__name__}")
+        
+        self._validate_string_length(var_name, "Variable name", max_length=100)
        
        if not VALID_IDENTIFIER_PATTERN.match(var_name):
-            raise ValueError(
+            raise ConfigValidationError(
                f"Invalid variable name '{var_name}'. Variable names must be valid Python identifiers "
                f"(letters, numbers, underscores, cannot start with a number)"
            )
        
+        # Validate value type and length
+        if isinstance(value, str):
+            self._validate_string_length(value, f"Value for '{var_name}'")
+        elif isinstance(value, list):
+            self._validate_list_length(value, f"Value for '{var_name}'")
+        elif value is not None and not isinstance(value, (bool, int, float)):
+            raise ConfigValidationError(
+                f"Invalid value type for '{var_name}': "
+                f"must be string, number, boolean, list, or null (got {type(value).__name__})"
+            )
+        
        defaults = self.get_defaults(module_name)
        defaults[var_name] = value
        self.set_defaults(module_name, defaults)
@@ -322,33 +470,50 @@ class ConfigManager:
        return preferences.get(key)
    
    def set_preference(self, key: str, value: Any) -> None:
-        """Set a user preference value.
+        """Set a user preference value with comprehensive validation.
        
        Args:
            key: Preference key
            value: Preference value
            
        Raises:
-            ValueError: If key or value is invalid for known preference types.
+            ConfigValidationError: If key or value is invalid for known preference types.
        """
        # Validate key
        if not isinstance(key, str) or not key:
-            raise ValueError("Preference key must be a non-empty string")
+            raise ConfigValidationError("Preference key must be a non-empty string")
        
-        # Validate known preference types
-        if key == "editor" and not isinstance(value, str):
-            raise ValueError("Preference 'editor' must be a string")
+        self._validate_string_length(key, "Preference key", max_length=100)
        
-        if key == "output_dir":
-            if value is not None and not isinstance(value, str):
-                raise ValueError("Preference 'output_dir' must be a string or null")
-        
-        if key == "library_paths":
+        # Validate known preference types
+        if key == "editor":
+            if not isinstance(value, str):
+                raise ConfigValidationError("Preference 'editor' must be a string")
+            self._validate_string_length(value, "Preference 'editor'", max_length=100)
+        
+        elif key == "output_dir":
+            if value is not None:
+                if not isinstance(value, str):
+                    raise ConfigValidationError("Preference 'output_dir' must be a string or null")
+                self._validate_path_string(value, "Preference 'output_dir'")
+        
+        elif key == "library_paths":
            if not isinstance(value, list):
-                raise ValueError("Preference 'library_paths' must be a list")
-            for path in value:
+                raise ConfigValidationError("Preference 'library_paths' must be a list")
+            
+            self._validate_list_length(value, "Preference 'library_paths'")
+            
+            for i, path in enumerate(value):
                if not isinstance(path, str):
-                    raise ValueError(f"Library path must be a string, got {type(path).__name__}")
+                    raise ConfigValidationError(f"Library path must be a string, got {type(path).__name__}")
+                self._validate_path_string(path, f"Library path at index {i}")
+        
+        # For unknown preference keys, apply basic validation
+        else:
+            if isinstance(value, str):
+                self._validate_string_length(value, f"Preference '{key}'")
+            elif isinstance(value, list):
+                self._validate_list_length(value, f"Preference '{key}'")
        
        config = self._read_config()
        

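Taken together, the helpers added above give `config.py` one uniform guard against oversized input before anything is written to disk. A minimal standalone sketch of the same length-guard pattern follows; the `MAX_STRING_LENGTH` value and the `ConfigValidationError` base class here are assumptions mirroring this commit, not the project's actual definitions:

```python
# Standalone sketch of the length-guard pattern behind _validate_string_length.
# MAX_STRING_LENGTH's value and ConfigValidationError's base class are
# illustrative assumptions; the real module defines its own.
MAX_STRING_LENGTH = 4096

class ConfigValidationError(ValueError):
    """Raised when a configuration value fails structural validation."""

def validate_string_length(value, field_name, max_length=MAX_STRING_LENGTH):
    # Reject oversized strings before they reach the YAML writer.
    if len(value) > max_length:
        raise ConfigValidationError(
            f"{field_name} exceeds maximum length of {max_length} characters "
            f"(got {len(value)} characters)"
        )

validate_string_length("vim", "Preference 'editor'")  # small value: passes
try:
    validate_string_length("x" * 10_000, "Module name")
except ConfigValidationError as e:
    print("rejected:", "Module name" in str(e))  # → rejected: True
```

Centralizing the check is what lets `set_defaults`, `set_default_value`, and `set_preference` emit identical error text for the same class of bad input.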
+ 96 - 56
cli/core/display.py

@@ -196,7 +196,48 @@ class DisplayManager:
 
    def display_validation_error(self, message: str) -> None:
        """Display a validation error message."""
-        console.print(f"[red]{message}[/red]")
+        self.display_message('error', message)
+    
+    def display_message(self, level: str, message: str, context: str | None = None) -> None:
+        """Display a message with consistent formatting.
+        
+        Args:
+            level: Message level (error, warning, success, info)
+            message: The message to display
+            context: Optional context information
+        """
+        icon = IconManager.get_status_icon(level)
+        colors = {'error': 'red', 'warning': 'yellow', 'success': 'green', 'info': 'blue'}
+        color = colors.get(level, 'white')
+        
+        # Format message based on context
+        if context:
+            text = f"{level.capitalize()} in {context}: {message}" if level == 'error' or level == 'warning' else f"{context}: {message}"
+        else:
+            text = f"{level.capitalize()}: {message}" if level == 'error' or level == 'warning' else message
+        
+        console.print(f"[{color}]{icon} {text}[/{color}]")
+        
+        # Log appropriately
+        log_message = f"{context}: {message}" if context else message
+        log_methods = {'error': logger.error, 'warning': logger.warning, 'success': logger.info, 'info': logger.info}
+        log_methods.get(level, logger.info)(log_message)
+    
+    def display_error(self, message: str, context: str | None = None) -> None:
+        """Display an error message."""
+        self.display_message('error', message, context)
+    
+    def display_warning(self, message: str, context: str | None = None) -> None:
+        """Display a warning message."""
+        self.display_message('warning', message, context)
+    
+    def display_success(self, message: str, context: str | None = None) -> None:
+        """Display a success message."""
+        self.display_message('success', message, context)
+    
+    def display_info(self, message: str, context: str | None = None) -> None:
+        """Display an informational message."""
+        self.display_message('info', message, context)
 
    def _display_template_header(self, template: Template, template_id: str) -> None:
        """Display the header for a template."""
@@ -209,38 +250,58 @@ class DisplayManager:
        )
        console.print(description)
 
-    def _display_file_tree(self, template: Template) -> None:
-        """Display the file structure of a template."""
-        # Preserve the heading, then use the template id as the root directory label
-        console.print()
-        console.print("[bold blue]Template File Structure:[/bold blue]")
-        # Use the template id as the root directory label (folder glyph + white name)
-        file_tree = Tree(f"{IconManager.folder()} [white]{template.id}[/white]")
+    def _build_file_tree(self, root_label: str, files: list, get_file_info: callable) -> Tree:
+        """Build a file tree structure.
+        
+        Args:
+            root_label: Label for root node
+            files: List of files to display
+            get_file_info: Function that takes a file and returns (path, display_name, color, extra_text)
+        
+        Returns:
+            Tree object ready for display
+        """
+        file_tree = Tree(root_label)
        tree_nodes = {Path("."): file_tree}
-
-        for template_file in sorted(
-            template.template_files, key=lambda f: f.relative_path
-        ):
-            parts = template_file.relative_path.parts
+        
+        for file_item in sorted(files, key=lambda f: get_file_info(f)[0]):
+            path, display_name, color, extra_text = get_file_info(file_item)
+            parts = path.parts
            current_path = Path(".")
            current_node = file_tree
-
+            
+            # Build directory structure
            for part in parts[:-1]:
                current_path = current_path / part
                if current_path not in tree_nodes:
                    new_node = current_node.add(f"{IconManager.folder()} [white]{part}[/white]")
                    tree_nodes[current_path] = new_node
-                    current_node = new_node
-                else:
-                    current_node = tree_nodes[current_path]
-
-            # Determine display name (use output_path to detect final filename)
-            display_name = template_file.output_path.name if hasattr(template_file, 'output_path') else template_file.relative_path.name
-
-            # Get appropriate icon based on file type/name
+                current_node = tree_nodes[current_path]
+            
+            # Add file
            icon = IconManager.get_file_icon(display_name)
-            current_node.add(f"[white]{icon} {display_name}[/white]")
-
+            file_label = f"{icon} [{color}]{display_name}[/{color}]"
+            if extra_text:
+                file_label += f" {extra_text}"
+            current_node.add(file_label)
+        
+        return file_tree
+    
+    def _display_file_tree(self, template: Template) -> None:
+        """Display the file structure of a template."""
+        console.print()
+        console.print("[bold blue]Template File Structure:[/bold blue]")
+        
+        def get_template_file_info(template_file):
+            display_name = template_file.output_path.name if hasattr(template_file, 'output_path') else template_file.relative_path.name
+            return (template_file.relative_path, display_name, 'white', None)
+        
+        file_tree = self._build_file_tree(
+            f"{IconManager.folder()} [white]{template.id}[/white]",
+            template.template_files,
+            get_template_file_info
+        )
+        
        if file_tree.children:
            console.print(file_tree)
 
         files: dict[str, str], 
         files: dict[str, str], 
         existing_files: list[Path] | None = None
         existing_files: list[Path] | None = None
     ) -> None:
     ) -> None:
-        """Display files to be generated with confirmation prompt.
-        
-        Args:
-            output_dir: The output directory path
-            files: Dictionary of file paths to content
-            existing_files: List of existing files that will be overwritten (if any)
-        """
+        """Display files to be generated with confirmation prompt."""
         console.print()
         console.print()
         console.print("[bold]Files to be generated:[/bold]")
         console.print("[bold]Files to be generated:[/bold]")
         
         
-        # Create a tree view of files
-        file_tree = Tree(f"{IconManager.folder()} [cyan]{output_dir.resolve()}[/cyan]")
-        tree_nodes = {Path("."): file_tree}
-        
-        # Sort files for better display
-        sorted_files = sorted(files.keys())
-        
-        for file_path_str in sorted_files:
+        def get_file_generation_info(file_path_str):
             file_path = Path(file_path_str)
-            parts = file_path.parts
-            current_path = Path(".")
-            current_node = file_tree
-            
-            # Build directory structure
-            for part in parts[:-1]:
-                current_path = current_path / part
-                if current_path not in tree_nodes:
-                    new_node = current_node.add(f"{IconManager.folder()} [white]{part}[/white]")
-                    tree_nodes[current_path] = new_node
-                current_node = tree_nodes[current_path]
-            
-            # Add file with indicator if it will be overwritten
-            file_name = parts[-1]
+            file_name = file_path.parts[-1] if file_path.parts else file_path.name
             full_path = output_dir / file_path
-            icon = IconManager.get_file_icon(file_name)
             
             if existing_files and full_path in existing_files:
-                current_node.add(f"{icon} [yellow]{file_name}[/yellow] [red](will overwrite)[/red]")
+                return (file_path, file_name, 'yellow', '[red](will overwrite)[/red]')
             else:
-                current_node.add(f"{icon} [green]{file_name}[/green]")
+                return (file_path, file_name, 'green', None)
+        
+        file_tree = self._build_file_tree(
+            f"{IconManager.folder()} [cyan]{output_dir.resolve()}[/cyan]",
+            files.keys(),
+            get_file_generation_info
+    )
         
         console.print(file_tree)
         console.print()

+ 133 - 0
cli/core/exceptions.py

@@ -0,0 +1,133 @@
+"""Custom exception classes for the boilerplates CLI.
+
+This module defines specific exception types for better error handling
+and diagnostics throughout the application.
+"""
+
+from typing import Optional, List
+
+
+class BoilerplatesError(Exception):
+    """Base exception for all boilerplates CLI errors."""
+    pass
+
+
+class ConfigError(BoilerplatesError):
+    """Raised when configuration operations fail."""
+    pass
+
+
+class ConfigValidationError(ConfigError):
+    """Raised when configuration validation fails."""
+    pass
+
+
+class TemplateError(BoilerplatesError):
+    """Base exception for template-related errors."""
+    pass
+
+
+class TemplateNotFoundError(TemplateError):
+    """Raised when a template cannot be found."""
+    
+    def __init__(self, template_id: str, module_name: Optional[str] = None):
+        self.template_id = template_id
+        self.module_name = module_name
+        msg = f"Template '{template_id}' not found"
+        if module_name:
+            msg += f" in module '{module_name}'"
+        super().__init__(msg)
+
+
+class TemplateLoadError(TemplateError):
+    """Raised when a template fails to load."""
+    pass
+
+
+class TemplateSyntaxError(TemplateError):
+    """Raised when a Jinja2 template has syntax errors."""
+    
+    def __init__(self, template_id: str, errors: List[str]):
+        self.template_id = template_id
+        self.errors = errors
+        msg = f"Jinja2 syntax errors in template '{template_id}':\n" + "\n".join(errors)
+        super().__init__(msg)
+
+
+class TemplateValidationError(TemplateError):
+    """Raised when template validation fails."""
+    pass
+
+
+class TemplateRenderError(TemplateError):
+    """Raised when template rendering fails."""
+    pass
+
+
+class VariableError(BoilerplatesError):
+    """Base exception for variable-related errors."""
+    pass
+
+
+class VariableValidationError(VariableError):
+    """Raised when variable validation fails."""
+    
+    def __init__(self, variable_name: str, message: str):
+        self.variable_name = variable_name
+        msg = f"Validation error for variable '{variable_name}': {message}"
+        super().__init__(msg)
+
+
+class VariableTypeError(VariableError):
+    """Raised when a variable has an incorrect type."""
+    
+    def __init__(self, variable_name: str, expected_type: str, actual_type: str):
+        self.variable_name = variable_name
+        self.expected_type = expected_type
+        self.actual_type = actual_type
+        msg = f"Type error for variable '{variable_name}': expected {expected_type}, got {actual_type}"
+        super().__init__(msg)
+
+
+class LibraryError(BoilerplatesError):
+    """Raised when library operations fail."""
+    pass
+
+
+class ModuleError(BoilerplatesError):
+    """Raised when module operations fail."""
+    pass
+
+
+class ModuleNotFoundError(ModuleError):
+    """Raised when a module cannot be found."""
+    
+    def __init__(self, module_name: str):
+        self.module_name = module_name
+        msg = f"Module '{module_name}' not found"
+        super().__init__(msg)
+
+
+class ModuleLoadError(ModuleError):
+    """Raised when a module fails to load."""
+    pass
+
+
+class FileOperationError(BoilerplatesError):
+    """Raised when file operations fail."""
+    pass
+
+
+class RenderError(BoilerplatesError):
+    """Raised when rendering operations fail."""
+    pass
+
+
+class YAMLParseError(BoilerplatesError):
+    """Raised when YAML parsing fails."""
+    
+    def __init__(self, file_path: str, original_error: Exception):
+        self.file_path = file_path
+        self.original_error = original_error
+        msg = f"Failed to parse YAML file '{file_path}': {original_error}"
+        super().__init__(msg)

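The new module gives callers a single base class to catch plus structured fields on specific errors. A trimmed, self-contained sketch of how the hierarchy is meant to be consumed (`lookup` is a made-up stand-in, not part of the commit):

```python
from typing import Optional

class BoilerplatesError(Exception):
    """Base exception (trimmed mirror of cli/core/exceptions.py)."""

class TemplateError(BoilerplatesError):
    """Base exception for template-related errors."""

class TemplateNotFoundError(TemplateError):
    """Carries structured fields instead of a bare message string."""
    def __init__(self, template_id: str, module_name: Optional[str] = None):
        self.template_id = template_id
        self.module_name = module_name
        msg = f"Template '{template_id}' not found"
        if module_name:
            msg += f" in module '{module_name}'"
        super().__init__(msg)

def lookup(template_id: str) -> None:
    # Illustrative stand-in for Library.find_by_id; always fails here.
    raise TemplateNotFoundError(template_id, "compose")

try:
    lookup("traefik")
except TemplateNotFoundError as exc:
    # Specific handler can use the structured fields...
    print(f"missing: {exc.template_id} (module {exc.module_name})")
except BoilerplatesError as exc:
    # ...while one base-class handler still catches every other CLI error.
    print(f"CLI error: {exc}")
```

This is why the library/module code below can replace its generic `FileNotFoundError` raises with typed exceptions without changing every call site at once.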
+ 28 - 45
cli/core/library.py

@@ -3,6 +3,9 @@ from __future__ import annotations
 from pathlib import Path
 import logging
 from typing import Optional
+import yaml
+
+from .exceptions import LibraryError, TemplateNotFoundError, YAMLParseError
 
 logger = logging.getLogger(__name__)
 
@@ -23,38 +26,20 @@ class Library:
     self.priority = priority  # Higher priority = checked first
   
   def _is_template_draft(self, template_path: Path) -> bool:
-    """Check if a template is marked as draft.
-    
-    Args:
-        template_path: Path to the template directory
-    
-    Returns:
-        True if the template is marked as draft, False otherwise
-    """
-    import yaml
-    
+    """Check if a template is marked as draft."""
     # Find the template file
-    template_file = None
-    if (template_path / "template.yaml").exists():
-      template_file = template_path / "template.yaml"
-    elif (template_path / "template.yml").exists():
-      template_file = template_path / "template.yml"
-    
-    if not template_file:
+    for filename in ("template.yaml", "template.yml"):
+      template_file = template_path / filename
+      if template_file.exists():
+        break
+    else:
       return False
     
     try:
       with open(template_file, "r", encoding="utf-8") as f:
-        documents = list(yaml.safe_load_all(f))
-        valid_docs = [doc for doc in documents if doc is not None]
-        
-        if not valid_docs:
-          return False
-        
-        template_data = valid_docs[0]
-        metadata = template_data.get("metadata", {})
-        return metadata.get("draft", False)
-    except Exception as e:
+        docs = [doc for doc in yaml.safe_load_all(f) if doc]
+        return docs[0].get("metadata", {}).get("draft", False) if docs else False
+    except (yaml.YAMLError, IOError, OSError) as e:
       logger.warning(f"Error checking draft status for {template_path}: {e}")
       return False
 
@@ -76,13 +61,13 @@ class Library:
     # Build the path to the specific template directory
     template_path = self.path / module_name / template_id
     
-    # Check if the template directory and either template.yaml or template.yml exist
-    if not (template_path.is_dir() and ((template_path / "template.yaml").exists() or (template_path / "template.yml").exists())):
-      raise FileNotFoundError(f"Template '{template_id}' not found in module '{module_name}' in library '{self.name}'")
+    # Check if template directory exists with a template file
+    has_template = template_path.is_dir() and any(
+      (template_path / f).exists() for f in ("template.yaml", "template.yml")
+    )
     
-    # Check if template is marked as draft
-    if self._is_template_draft(template_path):
-      raise FileNotFoundError(f"Template '{template_id}' is marked as draft and cannot be used")
+    if not has_template or self._is_template_draft(template_path):
+      raise TemplateNotFoundError(template_id, module_name)
     
     logger.debug(f"Found template '{template_id}' at: {template_path}")
     return template_path, self.name
@@ -110,21 +95,19 @@ class Library:
     
     # Check if the module directory exists
     if not module_path.is_dir():
-      raise FileNotFoundError(f"Module '{module_name}' not found in library '{self.name}'")
+      raise LibraryError(f"Module '{module_name}' not found in library '{self.name}'")
     
-    # Get all directories in the module path that contain a template.yaml or template.yml file
-    # and are not marked as draft
+    # Get non-draft templates
     template_dirs = []
     try:
       for item in module_path.iterdir():
-        if item.is_dir() and ((item / "template.yaml").exists() or (item / "template.yml").exists()):
-          # Skip draft templates
-          if not self._is_template_draft(item):
-            template_dirs.append((item, self.name))
-          else:
-            logger.debug(f"Skipping draft template: {item.name}")
+        has_template = item.is_dir() and any((item / f).exists() for f in ("template.yaml", "template.yml"))
+        if has_template and not self._is_template_draft(item):
+          template_dirs.append((item, self.name))
+        elif has_template:
+          logger.debug(f"Skipping draft template: {item.name}")
     except PermissionError as e:
-      raise FileNotFoundError(f"Permission denied accessing module '{module_name}' in library '{self.name}': {e}")
+      raise LibraryError(f"Permission denied accessing module '{module_name}' in library '{self.name}': {e}")
     
     # Sort if requested
     if sort_results:
@@ -163,7 +146,7 @@ class LibraryManager:
         template_path, lib_name = library.find_by_id(module_name, template_id)
         logger.debug(f"Found template '{template_id}' in library '{library.name}'")
         return template_path, lib_name
-      except FileNotFoundError:
+      except TemplateNotFoundError:
         # Continue searching in next library
         continue
     
@@ -189,7 +172,7 @@ class LibraryManager:
         templates = library.find(module_name, sort_results=False)  # Sort at the end
         all_templates.extend(templates)
         logger.debug(f"Found {len(templates)} templates in library '{library.name}'")
-      except FileNotFoundError:
+      except LibraryError:
         # Module not found in this library, continue with next
         logger.debug(f"Module '{module_name}' not found in library '{library.name}'")
         continue

+ 356 - 133
cli/core/module.py

@@ -4,7 +4,7 @@ import logging
 import sys
 from abc import ABC
 from pathlib import Path
-from typing import Any, Optional
+from typing import Any, Optional, List, Dict, Tuple
 
 from rich.console import Console
 from rich.panel import Panel
@@ -21,7 +21,7 @@ console = Console()
 console_err = Console(stderr=True)
 
 
-def parse_var_inputs(var_options: list[str], extra_args: list[str]) -> dict[str, Any]:
+def parse_var_inputs(var_options: List[str], extra_args: List[str]) -> Dict[str, Any]:
   """Parse variable inputs from --var options and extra args.
   """Parse variable inputs from --var options and extra args.
   
   
   Supports formats:
   Supports formats:
@@ -151,8 +151,7 @@ class Module(ABC):
     template = self._load_template_by_id(id)
 
     if not template:
-      logger.warning(f"Template '{id}' not found in module '{self.name}'")
-      console.print(f"[red]Template '{id}' not found in module '{self.name}'[/red]")
+      self.display.display_error(f"Template '{id}' not found", context=f"module '{self.name}'")
       return
     
     # Apply config defaults (same as in generate)
@@ -174,6 +173,284 @@ class Module(ABC):
     
     self._display_template_details(template, id)
 
+  def _apply_variable_defaults(self, template: Template) -> None:
+    """Apply config defaults and CLI overrides to template variables.
+    
+    Args:
+        template: Template instance with variables to configure
+    """
+    if not template.variables:
+      return
+    
+    from .config import ConfigManager
+    config = ConfigManager()
+    config_defaults = config.get_defaults(self.name)
+    
+    if config_defaults:
+      logger.info(f"Loading config defaults for module '{self.name}'")
+      successful = template.variables.apply_defaults(config_defaults, "config")
+      if successful:
+        logger.debug(f"Applied config defaults for: {', '.join(successful)}")
+
+  def _apply_cli_overrides(self, template: Template, var: Optional[List[str]], ctx: Context) -> None:
+    """Apply CLI variable overrides to template.
+    
+    Args:
+        template: Template instance to apply overrides to
+        var: List of variable override strings from --var flags
+        ctx: Typer context containing extra args
+    """
+    if not template.variables:
+      return
+    
+    extra_args = list(ctx.args) if ctx and hasattr(ctx, "args") else []
+    cli_overrides = parse_var_inputs(var or [], extra_args)
+    
+    if cli_overrides:
+      logger.info(f"Received {len(cli_overrides)} variable overrides from CLI")
+      successful_overrides = template.variables.apply_defaults(cli_overrides, "cli")
+      if successful_overrides:
+        logger.debug(f"Applied CLI overrides for: {', '.join(successful_overrides)}")
+
+  def _collect_variable_values(self, template: Template, interactive: bool) -> Dict[str, Any]:
+    """Collect variable values from user prompts and template defaults.
+    
+    Args:
+        template: Template instance with variables
+        interactive: Whether to prompt user for values interactively
+        
+    Returns:
+        Dictionary of variable names to values
+    """
+    variable_values = {}
+    
+    # Collect values interactively if enabled
+    if interactive and template.variables:
+      prompt_handler = PromptHandler()
+      collected_values = prompt_handler.collect_variables(template.variables)
+      if collected_values:
+        variable_values.update(collected_values)
+        logger.info(f"Collected {len(collected_values)} variable values from user input")
+    
+    # Add satisfied variable values (respects dependencies and toggles)
+    if template.variables:
+      variable_values.update(template.variables.get_satisfied_values())
+    
+    return variable_values
+
+  def _check_output_directory(self, output_dir: Path, rendered_files: Dict[str, str], 
+                              interactive: bool) -> Optional[List[Path]]:
+    """Check output directory for conflicts and get user confirmation if needed.
+    
+    Args:
+        output_dir: Directory where files will be written
+        rendered_files: Dictionary of file paths to rendered content
+        interactive: Whether to prompt user for confirmation
+        
+    Returns:
+        List of existing files that will be overwritten, or None to cancel
+    """
+    dir_exists = output_dir.exists()
+    dir_not_empty = dir_exists and any(output_dir.iterdir())
+    
+    # Check which files already exist
+    existing_files = []
+    if dir_exists:
+      for file_path in rendered_files.keys():
+        full_path = output_dir / file_path
+        if full_path.exists():
+          existing_files.append(full_path)
+    
+    # Warn if directory is not empty
+    if dir_not_empty:
+      if interactive:
+        console.print(f"\n[yellow]{IconManager.get_status_icon('warning')} Warning: Directory '{output_dir}' is not empty.[/yellow]")
+        if existing_files:
+          console.print(f"[yellow]  {len(existing_files)} file(s) will be overwritten.[/yellow]")
+        
+        if not Confirm.ask(f"Continue and potentially overwrite files in '{output_dir}'?", default=False):
+          console.print("[yellow]Generation cancelled.[/yellow]")
+          return None
+      else:
+        # Non-interactive mode: show warning but continue
+        logger.warning(f"Directory '{output_dir}' is not empty")
+        if existing_files:
+          logger.warning(f"{len(existing_files)} file(s) will be overwritten")
+    
+    return existing_files
+
+  def _get_generation_confirmation(self, output_dir: Path, rendered_files: Dict[str, str], 
+                                    existing_files: Optional[List[Path]], dir_not_empty: bool, 
+                                    dry_run: bool, interactive: bool) -> bool:
+    """Display file generation confirmation and get user approval.
+    
+    Args:
+        output_dir: Output directory path
+        rendered_files: Dictionary of file paths to content
+        existing_files: List of existing files that will be overwritten
+        dir_not_empty: Whether output directory already contains files
+        dry_run: Whether this is a dry run
+        interactive: Whether to prompt for confirmation
+        
+    Returns:
+        True if user confirms generation, False to cancel
+    """
+    if not interactive:
+      return True
+    
+    self.display.display_file_generation_confirmation(
+      output_dir, 
+      rendered_files, 
+      existing_files if existing_files else None
+    )
+    
+    # Final confirmation (only if we didn't already ask about overwriting)
+    if not dir_not_empty and not dry_run:
+      if not Confirm.ask("Generate these files?", default=True):
+        console.print("[yellow]Generation cancelled.[/yellow]")
+        return False
+    
+    return True
+
+  def _execute_dry_run(self, id: str, output_dir: Path, rendered_files: Dict[str, str], show_files: bool) -> None:
+    """Execute dry run mode with comprehensive simulation.
+    
+    Simulates all filesystem operations that would occur during actual generation,
+    including directory creation, file writing, and permission checks.
+    
+    Args:
+        id: Template ID
+        output_dir: Directory where files would be written
+        rendered_files: Dictionary of file paths to rendered content
+        show_files: Whether to display file contents
+    """
+    import os
+    from rich.table import Table
+    
+    console.print()
+    console.print("[bold blue]Dry Run Mode - Simulating File Generation[/bold blue]")
+    console.print()
+    
+    # Simulate directory creation
+    console.print("[cyan]📁 Directory Operations:[/cyan]")
+    
+    # Check if output directory exists
+    if output_dir.exists():
+      console.print(f"  [dim]✓[/dim] Output directory exists: {output_dir}")
+      # Check if we have write permissions
+      if os.access(output_dir, os.W_OK):
+        console.print(f"  [dim]✓[/dim] Write permission verified")
+      else:
+        console.print(f"  [yellow]⚠[/yellow]  Write permission may be denied")
+    else:
+      console.print(f"  [dim]→[/dim] Would create output directory: {output_dir}")
+      # Check if parent directory exists and is writable
+      parent = output_dir.parent
+      if parent.exists() and os.access(parent, os.W_OK):
+        console.print(f"  [dim]✓[/dim] Parent directory writable")
+      else:
+        console.print(f"  [yellow]⚠[/yellow]  Parent directory may not be writable")
+    
+    # Collect unique subdirectories that would be created
+    subdirs = set()
+    for file_path in rendered_files.keys():
+      parts = Path(file_path).parts
+      for i in range(1, len(parts)):
+        subdirs.add(Path(*parts[:i]))
+    
+    if subdirs:
+      console.print(f"  [dim]→[/dim] Would create {len(subdirs)} subdirectory(ies)")
+      for subdir in sorted(subdirs):
+        console.print(f"    [dim]•[/dim] {subdir}/")
+    
+    console.print()
+    
+    # Display file operations in a table
+    console.print("[cyan]📄 File Operations:[/cyan]")
+    
+    table = Table(show_header=True, header_style="bold cyan", box=None, padding=(0, 1))
+    table.add_column("File", style="white", no_wrap=False)
+    table.add_column("Size", justify="right", style="dim")
+    table.add_column("Status", style="yellow")
+    
+    total_size = 0
+    new_files = 0
+    overwrite_files = 0
+    
+    for file_path, content in sorted(rendered_files.items()):
+      full_path = output_dir / file_path
+      file_size = len(content.encode('utf-8'))
+      total_size += file_size
+      
+      # Determine status
+      if full_path.exists():
+        status = "Overwrite"
+        overwrite_files += 1
+      else:
+        status = "Create"
+        new_files += 1
+      
+      # Format size
+      if file_size < 1024:
+        size_str = f"{file_size}B"
+      elif file_size < 1024 * 1024:
+        size_str = f"{file_size / 1024:.1f}KB"
+      else:
+        size_str = f"{file_size / (1024 * 1024):.1f}MB"
+      
+      table.add_row(str(file_path), size_str, status)
+    
+    console.print(table)
+    console.print()
+    
+    # Summary statistics
+    console.print("[cyan]📊 Summary:[/cyan]")
+    console.print(f"  Total files: {len(rendered_files)}")
+    console.print(f"  New files: {new_files}")
+    console.print(f"  Files to overwrite: {overwrite_files}")
+    
+    if total_size < 1024:
+      size_str = f"{total_size}B"
+    elif total_size < 1024 * 1024:
+      size_str = f"{total_size / 1024:.1f}KB"
+    else:
+      size_str = f"{total_size / (1024 * 1024):.1f}MB"
+    console.print(f"  Total size: {size_str}")
+    console.print()
+    
+    # Show file contents if requested
+    if show_files:
+      console.print("[bold blue]Generated File Contents:[/bold blue]")
+      console.print()
+      for file_path, content in sorted(rendered_files.items()):
+        console.print(f"[cyan]File:[/cyan] {file_path}")
+        print(f"{'─'*80}")
+        print(content)
+        print()  # Add blank line after content
+      console.print()
+    
+    console.print(f"[yellow]{IconManager.get_status_icon('success')} Dry run complete - no files were written[/yellow]")
+    console.print(f"[dim]Files would have been generated in '{output_dir}'[/dim]")
+    logger.info(f"Dry run completed for template '{id}' - {len(rendered_files)} files, {total_size} bytes")
+
+  def _write_generated_files(self, output_dir: Path, rendered_files: Dict[str, str]) -> None:
+    """Write rendered files to the output directory.
+    
+    Args:
+        output_dir: Directory to write files to
+        rendered_files: Dictionary of file paths to rendered content
+    """
+    output_dir.mkdir(parents=True, exist_ok=True)
+    
+    for file_path, content in rendered_files.items():
+      full_path = output_dir / file_path
+      full_path.parent.mkdir(parents=True, exist_ok=True)
+      with open(full_path, 'w', encoding='utf-8') as f:
+        f.write(content)
+      console.print(f"[green]Generated file: {file_path}[/green]")
+    
+    console.print(f"\n[green]{IconManager.get_status_icon('success')} Template generated successfully in '{output_dir}'[/green]")
+    logger.info(f"Template written to directory: {output_dir}")
+
   def generate(
     self,
     id: str = Argument(..., help="Template ID"),
@@ -208,33 +485,12 @@ class Module(ABC):
         # Preview and show generated file contents
         cli compose generate traefik --dry-run --show-files
     """
-
     logger.info(f"Starting generation for template '{id}' from module '{self.name}'")
     logger.info(f"Starting generation for template '{id}' from module '{self.name}'")
     template = self._load_template_by_id(id)
     template = self._load_template_by_id(id)
 
 
-    # Apply config defaults (precedence: config > template > module)
-    # Config only sets VALUES, not the spec structure
-    if template.variables:
-      from .config import ConfigManager
-      config = ConfigManager()
-      config_defaults = config.get_defaults(self.name)
-      
-      if config_defaults:
-        logger.info(f"Loading config defaults for module '{self.name}'")
-        # Apply config defaults (this respects the variable types and validation)
-        successful = template.variables.apply_defaults(config_defaults, "config")
-        if successful:
-          logger.debug(f"Applied config defaults for: {', '.join(successful)}")
-    
-    # Apply CLI overrides (highest precedence)
-    extra_args = list(ctx.args) if ctx and hasattr(ctx, "args") else []
-    cli_overrides = parse_var_inputs(var or [], extra_args)
-    if cli_overrides:
-      logger.info(f"Received {len(cli_overrides)} variable overrides from CLI")
-      if template.variables:
-        successful_overrides = template.variables.apply_defaults(cli_overrides, "cli")
-        if successful_overrides:
-          logger.debug(f"Applied CLI overrides for: {', '.join(successful_overrides)}")
+    # Apply defaults and overrides
+    self._apply_variable_defaults(template)
+    self._apply_cli_overrides(template, var, ctx)
     
     # Re-sort sections after all overrides (toggle values may have changed)
     if template.variables:
@@ -243,116 +499,48 @@ class Module(ABC):
     self._display_template_details(template, id)
     console.print()
 
-    variable_values = {}
-    if interactive and template.variables:
-      prompt_handler = PromptHandler()
-      collected_values = prompt_handler.collect_variables(template.variables)
-      if collected_values:
-        variable_values.update(collected_values)
-        logger.info(f"Collected {len(collected_values)} variable values from user input")
-
-    if template.variables:
-      # Use get_satisfied_values() to exclude variables from sections with unsatisfied dependencies
-      variable_values.update(template.variables.get_satisfied_values())
+    # Collect variable values
+    variable_values = self._collect_variable_values(template, interactive)
 
     try:
-      # Validate all variables before rendering
+      # Validate and render template
       if template.variables:
         template.variables.validate_all()
       
       rendered_files, variable_values = template.render(template.variables)
       
-      # Safety check for render result
       if not rendered_files:
-        console_err.print("[red]Error: Template rendering returned no files[/red]")
+        self.display.display_error("Template rendering returned no files", context="template generation")
         raise Exit(code=1)
       
       logger.info(f"Successfully rendered template '{id}'")
       
-      # Determine output directory (default to template ID)
+      # Determine output directory
       output_dir = Path(directory) if directory else Path(id)
       
-      # Check if directory exists and is not empty
-      dir_exists = output_dir.exists()
-      dir_not_empty = dir_exists and any(output_dir.iterdir())
+      # Check for conflicts and get confirmation
+      existing_files = self._check_output_directory(output_dir, rendered_files, interactive)
+      if existing_files is None:
+        return  # User cancelled
       
-      # Check which files already exist
-      existing_files = []
-      if dir_exists:
-        for file_path in rendered_files.keys():
-          full_path = output_dir / file_path
-          if full_path.exists():
-            existing_files.append(full_path)
+      # Get final confirmation for generation
+      dir_not_empty = output_dir.exists() and any(output_dir.iterdir())
+      if not self._get_generation_confirmation(output_dir, rendered_files, existing_files, 
+                                               dir_not_empty, dry_run, interactive):
+        return  # User cancelled
       
       
-      # Warn if directory is not empty (both interactive and non-interactive)
-      if dir_not_empty:
-        if interactive:
-          console.print(f"\n[yellow]{IconManager.get_status_icon('warning')} Warning: Directory '{output_dir}' is not empty.[/yellow]")
-          if existing_files:
-            console.print(f"[yellow]  {len(existing_files)} file(s) will be overwritten.[/yellow]")
-          
-          if not Confirm.ask(f"Continue and potentially overwrite files in '{output_dir}'?", default=False):
-            console.print("[yellow]Generation cancelled.[/yellow]")
-            return
-        else:
-          # Non-interactive mode: show warning but continue
-          logger.warning(f"Directory '{output_dir}' is not empty")
-          if existing_files:
-            logger.warning(f"{len(existing_files)} file(s) will be overwritten")
-      
-      # Display file generation confirmation in interactive mode
-      if interactive:
-        self.display.display_file_generation_confirmation(
-          output_dir, 
-          rendered_files, 
-          existing_files if existing_files else None
-        )
-        
-        # Final confirmation (only if we didn't already ask about overwriting)
-        if not dir_not_empty and not dry_run:
-          if not Confirm.ask("Generate these files?", default=True):
-            console.print("[yellow]Generation cancelled.[/yellow]")
-            return
-      
-      # Skip file writing in dry-run mode
+      # Execute generation (dry run or actual)
      if dry_run:
-        # Display file contents if requested
-        if show_files:
-          console.print()
-          console.print("[bold blue]Generated Files:[/bold blue]")
-          console.print()
-          for file_path, content in sorted(rendered_files.items()):
-            console.print(f"[cyan]File:[/cyan] {file_path}")
-            print(f"{'─'*80}")
-            print(content)
-            print()  # Add blank line after content
-        
-        console.print(f"[yellow]{IconManager.get_status_icon('success')} Dry run complete - no files were written[/yellow]")
-        console.print(f"[dim]Files would have been generated in '{output_dir}'[/dim]")
-        logger.info(f"Dry run completed for template '{id}'")
+        self._execute_dry_run(id, output_dir, rendered_files, show_files)
      else:
-        # Create the output directory if it doesn't exist
-        output_dir.mkdir(parents=True, exist_ok=True)
-
-        # Write rendered files to the output directory
-        for file_path, content in rendered_files.items():
-          full_path = output_dir / file_path
-          full_path.parent.mkdir(parents=True, exist_ok=True)
-          with open(full_path, 'w', encoding='utf-8') as f:
-            f.write(content)
-          console.print(f"[green]Generated file: {file_path}[/green]")
-        
-        console.print(f"\n[green]{IconManager.get_status_icon('success')} Template generated successfully in '{output_dir}'[/green]")
-        logger.info(f"Template written to directory: {output_dir}")
+        self._write_generated_files(output_dir, rendered_files)
-      # Display next steps if provided in template metadata
+      # Display next steps
      if template.metadata.next_steps:
        self.display.display_next_steps(template.metadata.next_steps, variable_values)

    except Exception as e:
-      logger.error(f"Error rendering template '{id}': {e}")
-      console_err.print(f"[red]Error generating template: {e}[/red]")
-      # Stop execution without letting Typer/Click print the exception again.
+      self.display.display_error(str(e), context=f"generating template '{id}'")
      raise Exit(code=1)

  def config_get(
@@ -377,7 +565,7 @@ class Module(ABC):
      if value is not None:
        console.print(f"[green]{var_name}[/green] = [yellow]{value}[/yellow]")
      else:
-        console.print(f"[red]No default set for variable '{var_name}' in module '{self.name}'[/red]")
+        self.display.display_warning(f"No default set for variable '{var_name}'", context=f"module '{self.name}'")
    else:
      # Show all defaults (flat list)
      defaults = config.get_defaults(self.name)
@@ -426,8 +614,8 @@ class Module(ABC):
      actual_var_name = var_name
      actual_value = value
    else:
-      console_err.print(f"[red]Error: Missing value for variable '{var_name}'[/red]")
-      console_err.print(f"[dim]Usage: defaults set VAR_NAME VALUE or defaults set VAR_NAME=VALUE[/dim]")
+      self.display.display_error(f"Missing value for variable '{var_name}'", context="config set")
+      console.print(f"[dim]Usage: defaults set VAR_NAME VALUE or defaults set VAR_NAME=VALUE[/dim]")
      raise Exit(code=1)

    # Set the default value
@@ -543,9 +731,18 @@ class Module(ABC):
  def validate(
    self,
    template_id: str = Argument(None, help="Template ID to validate (if omitted, validates all templates)"),
-    verbose: bool = Option(False, "--verbose", "-v", help="Show detailed validation information")
+    verbose: bool = Option(False, "--verbose", "-v", help="Show detailed validation information"),
+    semantic: bool = Option(True, "--semantic/--no-semantic", help="Enable semantic validation (Docker Compose schema, etc.)")
  ) -> None:
-    """Validate templates for Jinja2 syntax errors and undefined variables.
+    """Validate templates for Jinja2 syntax, undefined variables, and semantic correctness.
+    
+    Validation includes:
+    - Jinja2 syntax checking
+    - Variable definition checking
+    - Semantic validation (when --semantic is enabled):
+      - Docker Compose file structure
+      - YAML syntax
+      - Configuration best practices

    Examples:
        # Validate all templates in this module
@@ -556,8 +753,12 @@ class Module(ABC):

        # Validate with verbose output
        cli compose validate --verbose
+        
+        # Skip semantic validation (only Jinja2)
+        cli compose validate --no-semantic
    """
    from rich.table import Table
+    from .validators import get_validator_registry

    if template_id:
      # Validate a specific template
@@ -570,11 +771,37 @@ class Module(ABC):
          _ = template.used_variables
          # Trigger variable definition validation by accessing variables
          _ = template.variables
-          console.print(f"[green]{IconManager.get_status_icon('success')} Template '{template_id}' is valid[/green]")
+          console.print(f"[green]{IconManager.get_status_icon('success')} Jinja2 validation passed[/green]")
+          
+          # Semantic validation
+          if semantic:
+            console.print(f"\n[bold cyan]Running semantic validation...[/bold cyan]")
+            registry = get_validator_registry()
+            has_semantic_errors = False
+            
+            # Render template with default values for validation
+            rendered_files, _ = template.render(template.variables)
+            
+            for file_path, content in rendered_files.items():
+              result = registry.validate_file(content, file_path)
+              
+              if result.errors or result.warnings or (verbose and result.info):
+                console.print(f"\n[cyan]File:[/cyan] {file_path}")
+                result.display(f"{file_path}")
+                
+                if result.errors:
+                  has_semantic_errors = True
+            
+            if not has_semantic_errors:
+              console.print(f"\n[green]{IconManager.get_status_icon('success')} Semantic validation passed[/green]")
+            else:
+              console.print(f"\n[red]{IconManager.get_status_icon('error')} Semantic validation found errors[/red]")
+              raise Exit(code=1)

          if verbose:
            console.print(f"\n[dim]Template path: {template.template_dir}[/dim]")
            console.print(f"[dim]Found {len(template.used_variables)} variables[/dim]")
+            console.print(f"[dim]Generated {len(rendered_files)} files[/dim]")
        except ValueError as e:
          console.print(f"[red]{IconManager.get_status_icon('error')} Validation failed for '{template_id}':[/red]")
          console.print(f"\n{e}")
@@ -665,22 +892,18 @@ class Module(ABC):
    app.add_typer(module_app, name=cls.name, help=cls.description)
    logger.info(f"Module '{cls.name}' CLI commands registered")

-  def _load_template_by_id(self, template_id: str) -> Template:
-    result = self.libraries.find_by_id(self.name, template_id)
+  def _load_template_by_id(self, id: str) -> Template:
+    result = self.libraries.find_by_id(self.name, id)
    if not result:
-      logger.debug(f"Template '{template_id}' not found in module '{self.name}'")
-      raise FileNotFoundError(f"Template '{template_id}' not found in module '{self.name}'")
-
-    template_dir, library_name = result
+      raise FileNotFoundError(f"Template '{id}' not found in module '{self.name}'")
+    template_dir, library_name = result
    try:
      return Template(template_dir, library_name=library_name)
-    except (ValueError, FileNotFoundError) as exc:
-      raise FileNotFoundError(f"Template '{template_id}' validation failed in module '{self.name}'") from exc
    except Exception as exc:
-      logger.error(f"Failed to load template from {template_dir}: {exc}")
-      raise FileNotFoundError(f"Template '{template_id}' could not be loaded in module '{self.name}'") from exc
+      logger.error(f"Failed to load template '{id}': {exc}")
+      raise FileNotFoundError(f"Template '{id}' could not be loaded: {exc}") from exc
-  def _display_template_details(self, template: Template, template_id: str) -> None:
+  def _display_template_details(self, template: Template, id: str) -> None:
    """Display template information panel and variables table."""
-    self.display.display_template_details(template, template_id)
+    self.display.display_template_details(template, id)

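The file-writing logic removed above is extracted into a `_write_generated_files` helper whose body is not shown in this diff; a minimal standalone sketch of what it presumably does, reconstructed from the removed inline code (the plain `print` stands in for the Rich console output):

```python
from pathlib import Path
from typing import Dict

def write_generated_files(output_dir: Path, rendered_files: Dict[str, str]) -> None:
    """Write each rendered file under output_dir, creating parent dirs as needed."""
    output_dir.mkdir(parents=True, exist_ok=True)
    for file_path, content in rendered_files.items():
        full_path = output_dir / file_path
        # Nested output paths (e.g. "conf/app.env") need their parent created first
        full_path.parent.mkdir(parents=True, exist_ok=True)
        full_path.write_text(content, encoding="utf-8")
        print(f"Generated file: {file_path}")
```

Splitting the monolithic `generate` body into `_check_output_directory`, `_get_generation_confirmation`, `_execute_dry_run`, and `_write_generated_files` keeps each step independently testable.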
+ 21 - 28
cli/core/prompt.py

@@ -7,7 +7,8 @@ from rich.prompt import Prompt, Confirm, IntPrompt
from rich.table import Table

from .display import DisplayManager, IconManager
-from .variables import Variable, VariableCollection
+from .variable import Variable
+from .collection import VariableCollection

logger = logging.getLogger(__name__)

@@ -70,7 +71,7 @@ class PromptHandler:
        if toggle_var:
          # Use description for prompt if available, otherwise use title
          prompt_text = section.description if section.description else f"Enable {section.title}?"
-          current_value = toggle_var.get_typed_value()
+          current_value = toggle_var.convert(toggle_var.value)
          new_value = self._prompt_bool(prompt_text, current_value)

          if new_value != current_value:
@@ -87,7 +88,7 @@ class PromptHandler:
        if section.toggle and var_name == section.toggle:
          continue

-        current_value = variable.get_typed_value()
+        current_value = variable.convert(variable.value)
        # Pass section.required so _prompt_variable can enforce required inputs
        new_value = self._prompt_variable(variable, required=section.required)

@@ -142,18 +143,12 @@ class PromptHandler:
    while True:
      try:
        raw = handler(prompt_text, default_value)
-        # Convert/validate the user's input using the Variable conversion
-        converted = variable.convert(raw)
-
-        # Allow empty values for autogenerated variables
-        # Also treat the "*auto" marker as a signal for autogeneration
-        if variable.autogenerated and (converted is None or (isinstance(converted, str) and (converted == "" or converted == "*auto"))):
-          return None  # Return None to indicate auto-generation should happen
+        # Use Variable's centralized validation method that handles:
+        # - Type conversion
+        # - Autogenerated variable detection
+        # - Required field validation
+        converted = variable.validate_and_convert(raw, check_required=True)
-        # If this variable is required, do not accept None/empty values
-        if var_is_required and (converted is None or (isinstance(converted, str) and converted == "")):
-          raise ValueError("This field is required and cannot be empty")
-
        # Return the converted value (caller will update variable.value)
        return converted
      except ValueError as exc:
@@ -180,32 +175,30 @@ class PromptHandler:
    """Display validation feedback consistently."""
    self.display.display_validation_error(message)

-  def _prompt_string(self, prompt_text: str, default: Any = None, is_sensitive: bool = False) -> str:
+  def _prompt_string(self, prompt_text: str, default: Any = None, is_sensitive: bool = False) -> str | None:
    value = Prompt.ask(
      prompt_text,
      default=str(default) if default is not None else "",
      show_default=True,
      password=is_sensitive
    )
-    if value is None:
-      return None
-    stripped = value.strip()
-    return stripped if stripped != "" else None
+    stripped = value.strip() if value else None
+    return stripped if stripped else None
-  def _prompt_bool(self, prompt_text: str, default: Any = None) -> bool:
-    default_bool = None
-    if default is not None:
-      default_bool = default if isinstance(default, bool) else str(default).lower() in ("true", "1", "yes", "on")
-    return Confirm.ask(prompt_text, default=default_bool)
+  def _prompt_bool(self, prompt_text: str, default: Any = None) -> bool | None:
+    if default is None:
+      return Confirm.ask(prompt_text, default=None)
+    converted = default if isinstance(default, bool) else str(default).lower() in ("true", "1", "yes", "on")
+    return Confirm.ask(prompt_text, default=converted)
-  def _prompt_int(self, prompt_text: str, default: Any = None) -> int:
-    default_int = None
+  def _prompt_int(self, prompt_text: str, default: Any = None) -> int | None:
+    converted = None
      if default is not None:
        try:
-        default_int = int(default)
+        converted = int(default)
        except (ValueError, TypeError):
          logger.warning(f"Invalid default integer value: {default}")
-    return IntPrompt.ask(prompt_text, default=default_int)
+    return IntPrompt.ask(prompt_text, default=converted)
  def _prompt_enum(self, prompt_text: str, options: list[str], default: Any = None, extra: str | None = None) -> str:
    """Prompt for enum selection with validation.

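The new `variable.validate_and_convert(raw, check_required=True)` call replaces the inline logic removed above; its implementation lives in `cli/core/variable.py` and is not shown in this diff, so the following is a hypothetical standalone sketch of the documented behavior (the function name and `convert` callback parameter are illustrative, not the real method signature):

```python
from typing import Any, Callable

def validate_and_convert(raw: Any, convert: Callable[[Any], Any],
                         autogenerated: bool = False,
                         check_required: bool = False) -> Any:
    """Convert user input, honoring autogeneration markers and required fields."""
    converted = convert(raw)
    # Empty input (or the "*auto" marker) on an autogenerated variable
    # signals that a value should be generated later — checked before
    # the required check, mirroring the order of the replaced inline code.
    if autogenerated and converted in (None, "", "*auto"):
        return None
    if check_required and converted in (None, ""):
        raise ValueError("This field is required and cannot be empty")
    return converted
```

Centralizing this in one method keeps the prompt loop's `except ValueError` handler as the single place where validation feedback is shown.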
+ 113 - 0
cli/core/section.py

@@ -0,0 +1,113 @@
+from __future__ import annotations
+
+from collections import OrderedDict
+from typing import Any, Dict, List, Optional
+
+from .variable import Variable
+
+
+class VariableSection:
+  """Groups variables together with shared metadata for presentation."""
+
+  def __init__(self, data: dict[str, Any]) -> None:
+    """Initialize VariableSection from a dictionary.
+    
+    Args:
+        data: Dictionary containing section specification with required 'key' and 'title' keys
+    """
+    if not isinstance(data, dict):
+      raise ValueError("VariableSection data must be a dictionary")
+    
+    if "key" not in data:
+      raise ValueError("VariableSection data must contain 'key'")
+    
+    if "title" not in data:
+      raise ValueError("VariableSection data must contain 'title'")
+    
+    self.key: str = data["key"]
+    self.title: str = data["title"]
+    self.variables: OrderedDict[str, Variable] = OrderedDict()
+    self.description: Optional[str] = data.get("description")
+    self.toggle: Optional[str] = data.get("toggle")
+    # Default "general" section to required=True, all others to required=False
+    self.required: bool = data.get("required", data["key"] == "general")
+    # Section dependencies - can be string or list of strings
+    needs_value = data.get("needs")
+    if needs_value:
+      if isinstance(needs_value, str):
+        self.needs: List[str] = [needs_value]
+      elif isinstance(needs_value, list):
+        self.needs: List[str] = needs_value
+      else:
+        raise ValueError(f"Section '{self.key}' has invalid 'needs' value: must be string or list")
+    else:
+      self.needs: List[str] = []
+
+  def to_dict(self) -> Dict[str, Any]:
+    """Serialize VariableSection to a dictionary for storage."""
+    section_dict = {
+      'required': self.required,
+      'vars': {name: var.to_dict() for name, var in self.variables.items()}
+    }
+    
+    # Add optional fields if present
+    for field in ('title', 'description', 'toggle'):
+      if value := getattr(self, field):
+        section_dict[field] = value
+    
+    # Store dependencies (single value if only one, list otherwise)
+    if self.needs:
+      section_dict['needs'] = self.needs[0] if len(self.needs) == 1 else self.needs
+    
+    return section_dict
+  
+  def is_enabled(self) -> bool:
+    """Check if section is currently enabled based on toggle variable.
+    
+    Returns:
+        True if section is enabled (no toggle or toggle is True), False otherwise
+    """
+    if not self.toggle:
+      return True
+    
+    toggle_var = self.variables.get(self.toggle)
+    if not toggle_var:
+      return True
+    
+    try:
+      return bool(toggle_var.convert(toggle_var.value))
+    except Exception:
+      return False
+  
+  def clone(self, origin_update: Optional[str] = None) -> 'VariableSection':
+    """Create a deep copy of the section with all variables.
+    
+    This is more efficient than converting to dict and back when copying sections.
+    
+    Args:
+        origin_update: Optional origin string to apply to all cloned variables
+        
+    Returns:
+        New VariableSection instance with deep-copied variables
+        
+    Example:
+        section2 = section1.clone(origin_update='template')
+    """
+    # Create new section with same metadata
+    cloned = VariableSection({
+      'key': self.key,
+      'title': self.title,
+      'description': self.description,
+      'toggle': self.toggle,
+      'required': self.required,
+      'needs': self.needs.copy() if self.needs else None,
+    })
+    
+    # Deep copy all variables
+    for var_name, variable in self.variables.items():
+      if origin_update:
+        cloned.variables[var_name] = variable.clone(update={'origin': origin_update})
+      else:
+        cloned.variables[var_name] = variable.clone()
+    
+    return cloned

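For reference, the defaulting rules in `VariableSection.__init__` above (only the `general` section is required by default; `needs` accepts a string or a list) can be restated as a standalone sketch that mirrors the class's normalization, without the `Variable` dependency:

```python
from typing import Any, Dict, List

def normalize_section(data: Dict[str, Any]) -> Dict[str, Any]:
    """Mirror VariableSection's defaulting rules for 'required' and 'needs'."""
    for key in ("key", "title"):
        if key not in data:
            raise ValueError(f"VariableSection data must contain '{key}'")
    # Only the "general" section is required by default
    required = data.get("required", data["key"] == "general")
    needs_value = data.get("needs")
    if not needs_value:
        needs: List[str] = []
    elif isinstance(needs_value, str):
        needs = [needs_value]  # single dependency normalized to a list
    elif isinstance(needs_value, list):
        needs = needs_value
    else:
        raise ValueError(f"Section '{data['key']}' has invalid 'needs' value: must be string or list")
    return {"key": data["key"], "title": data["title"], "required": required, "needs": needs}
```

Note that `to_dict` reverses the `needs` normalization, storing a single dependency as a plain string.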
+ 56 - 88
cli/core/template.py

@@ -1,6 +1,16 @@
from __future__ import annotations

-from .variables import Variable, VariableCollection
+from .variable import Variable
+from .collection import VariableCollection
+from .exceptions import (
+    TemplateError,
+    TemplateLoadError,
+    TemplateSyntaxError,
+    TemplateValidationError,
+    TemplateRenderError,
+    YAMLParseError,
+    ModuleLoadError
+)
from pathlib import Path
from typing import Any, Dict, List, Set, Optional, Literal
from dataclasses import dataclass, field
@@ -9,6 +19,7 @@ import logging
import os
import yaml
from jinja2 import Environment, FileSystemLoader, meta
+from jinja2.sandbox import SandboxedEnvironment
from jinja2 import nodes
from jinja2.visitor import NodeVisitor

@@ -32,7 +43,6 @@ class TemplateMetadata:
  version: str
  module: str = ""
  tags: List[str] = field(default_factory=list)
-  # files: List[str] = field(default_factory=list) # No longer needed, as TemplateFile handles this
  library: str = "unknown"
  next_steps: str = ""
  draft: bool = False
@@ -64,7 +74,6 @@ class TemplateMetadata:
    self.version = metadata_section.get("version", "")
    self.module = metadata_section.get("module", "")
    self.tags = metadata_section.get("tags", []) or []
-    # self.files = metadata_section.get("files", []) or [] # No longer needed
    self.library = library_name or "unknown"
    self.draft = metadata_section.get("draft", False)

@@ -152,10 +161,13 @@ class Template:

    except (ValueError, FileNotFoundError) as e:
      logger.error(f"Error loading template from {template_dir}: {e}")
-      raise
-    except Exception as e:
-      logger.error(f"An unexpected error occurred while loading template {template_dir}: {e}")
-      raise
+      raise TemplateLoadError(f"Error loading template from {template_dir}: {e}")
+    except yaml.YAMLError as e:
+      logger.error(f"YAML parsing error in template {template_dir}: {e}")
+      raise YAMLParseError(str(template_dir / "template.y*ml"), e)
+    except (IOError, OSError) as e:
+      logger.error(f"File I/O error loading template {template_dir}: {e}")
+      raise TemplateLoadError(f"File I/O error loading template from {template_dir}: {e}")

  def _find_main_template_file(self) -> Path:
    """Find the main template file (template.yaml or template.yml)."""
@@ -262,20 +274,18 @@ class Template:
            content = f.read()
            ast = self.jinja_env.parse(content) # Use lazy-loaded jinja_env
            used_variables.update(meta.find_undeclared_variables(ast))
+        except (IOError, OSError) as e:
+          relative_path = file_path.relative_to(self.template_dir)
+          syntax_errors.append(f"  - {relative_path}: File I/O error: {e}")
        except Exception as e:
-          # Collect syntax errors instead of just warning
+          # Collect syntax errors for Jinja2 issues
          relative_path = file_path.relative_to(self.template_dir)
          syntax_errors.append(f"  - {relative_path}: {e}")

    # Raise error if any syntax errors were found
    if syntax_errors:
-      error_msg = (
-        f"Jinja2 syntax errors found in template '{self.id}':\n" +
-        "\n".join(syntax_errors) +
-        "\n\nPlease fix the syntax errors in the template files."
-      )
-      logger.error(error_msg)
-      raise ValueError(error_msg)
+      logger.error(f"Jinja2 syntax errors found in template '{self.id}'")
+      raise TemplateSyntaxError(self.id, syntax_errors)

    return used_variables

@@ -321,8 +331,8 @@ class Template:
          content = f.read()
        ast = self.jinja_env.parse(content)
        visitor.visit(ast)
-      except Exception:
-        # skip failures - this extraction is best-effort only
+      except (IOError, OSError, yaml.YAMLError):
+        # Skip failures - this extraction is best-effort only
        continue

    return visitor.found
@@ -367,7 +377,7 @@ class Template:
        ValueError: If 'kind' field is missing
    """
    if not template_data.get("kind"):
-      raise ValueError("Template format error: missing 'kind' field")
+      raise TemplateValidationError("Template format error: missing 'kind' field")

  def _validate_variable_definitions(self, used_variables: set[str], merged_specs: dict[str, Any]) -> None:
    """Validate that all variables used in Jinja2 content are defined in the spec."""
@@ -397,16 +407,21 @@ class Template:
              f"        default: <your_default_value_here>\n"
          )
      logger.error(error_msg)
-      raise ValueError(error_msg)
+      raise TemplateValidationError(error_msg)

  @staticmethod
  def _create_jinja_env(searchpath: Path) -> Environment:
-    """Create standardized Jinja2 environment for consistent template processing.
+    """Create sandboxed Jinja2 environment for secure template processing.
-    Returns a simple Jinja2 environment without custom filters.
-    Variable autogeneration is handled by the render() method.
+    Uses SandboxedEnvironment to prevent code injection vulnerabilities
+    when processing untrusted templates. This restricts access to dangerous
+    operations while still allowing safe template rendering.
+    
+    Returns:
+        SandboxedEnvironment configured for template processing.
    """
-    return Environment(
+    # NOTE Use SandboxedEnvironment for security - prevents arbitrary code execution
+    return SandboxedEnvironment(
      loader=FileSystemLoader(searchpath),
      trim_blocks=True,
      lstrip_blocks=True,
@@ -446,7 +461,7 @@ class Template:
          rendered_files[str(template_file.output_path)] = rendered_content
        except Exception as e:
          logger.error(f"Error rendering template file {template_file.relative_path}: {e}")
-          raise
+          raise TemplateRenderError(f"Error rendering {template_file.relative_path}: {e}")
      elif template_file.file_type == 'static':
          # For static files, just read their content and add to rendered_files
          # This ensures static files are also part of the output dictionary
@@ -455,78 +470,31 @@ class Template:
              with open(file_path, "r", encoding="utf-8") as f:
                  content = f.read()
                  rendered_files[str(template_file.output_path)] = content
-          except Exception as e:
+          except (IOError, OSError) as e:
              logger.error(f"Error reading static file {file_path}: {e}")
-              raise
+              raise TemplateRenderError(f"Error reading static file {file_path}: {e}")

    return rendered_files, variable_values

  def _sanitize_content(self, content: str, file_path: Path) -> str:
-    """Sanitize rendered content by removing excessive blank lines.
-    
-    This function:
-    - Reduces multiple consecutive blank lines to a maximum of one blank line
-    - Preserves file structure and readability
-    - Removes trailing whitespace from lines
-    - Ensures file ends with a single newline
-    
-    Args:
-        content: The rendered content to sanitize
-        file_path: Path to the output file (used for file-type detection)
-        
-    Returns:
-        Sanitized content with cleaned up blank lines
-    """
+    """Sanitize rendered content by removing excessive blank lines and trailing whitespace."""
    if not content:
      return content

-    # Split content into lines
-    lines = content.split('\n')
-    sanitized_lines = []
-    blank_line_count = 0
+    lines = [line.rstrip() for line in content.split('\n')]
+    sanitized = []
+    prev_blank = False

    for line in lines:
-      # Remove trailing whitespace from the line
-      cleaned_line = line.rstrip()
-      
-      # Check if this is a blank line
-      if not cleaned_line:
-        blank_line_count += 1
-        # Only keep the first blank line in a sequence
-        if blank_line_count == 1:
-          sanitized_lines.append('')
-      else:
-        # Reset counter when we hit a non-blank line
-        blank_line_count = 0
-        sanitized_lines.append(cleaned_line)
-    
-    # Join lines back together
-    result = '\n'.join(sanitized_lines)
-    
-    # Remove leading blank lines
-    result = result.lstrip('\n')
-    
-    # Ensure file ends with exactly one newline
-    result = result.rstrip('\n') + '\n'
-    
-    return result
-
-  def mask_sensitive_values(self, rendered_files: Dict[str, str], variables: VariableCollection) -> Dict[str, str]:
-    """Mask sensitive values in rendered files using Variable's native masking."""
-    masked_files = {}
-    
-    # Get all variables (not just sensitive ones) to use their native get_display_value()
-    for file_path, content in rendered_files.items():
-      # Iterate through all sections and variables
-      for section in variables.get_sections().values():
-        for variable in section.variables.values():
-          if variable.sensitive and variable.value:
-            # Use variable's native masking - always returns "********" for sensitive vars
-            masked_value = variable.get_display_value(mask_sensitive=True)
-            content = content.replace(str(variable.value), masked_value)
-      masked_files[file_path] = content
-      
-    return masked_files
+      is_blank = not line
+      if is_blank and prev_blank:
+        continue  # Skip consecutive blank lines
+      sanitized.append(line)
+      prev_blank = is_blank
+    
+    # Remove leading blanks and ensure single trailing newline
+    return '\n'.join(sanitized).lstrip('\n').rstrip('\n') + '\n'
+
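Pulled out of the class for illustration, the tightened loop in this hunk amounts to the following standalone helper (a sketch; the real method also takes a `file_path` it currently doesn't use):

```python
def sanitize(content: str) -> str:
    """Collapse runs of blank lines, strip trailing whitespace, normalize EOF."""
    if not content:
        return content
    lines = [line.rstrip() for line in content.split('\n')]
    sanitized = []
    prev_blank = False
    for line in lines:
        is_blank = not line
        if is_blank and prev_blank:
            continue  # skip consecutive blank lines
        sanitized.append(line)
        prev_blank = is_blank
    # Drop leading blanks, end with exactly one newline
    return '\n'.join(sanitized).lstrip('\n').rstrip('\n') + '\n'

print(sanitize("a  \n\n\n\nb\n\n"))  # → "a\n\nb\n"
```

Note the `prev_blank` flag replaces the old `blank_line_count` counter: since only the first blank in a run was ever kept, a boolean carries the same information.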
   
   
  @property
  def template_files(self) -> List[TemplateFile]:
@@ -584,8 +552,8 @@ class Template:
                if 'default' not in var_data or var_data.get('default') in (None, ''):
                  if var_name in jinja_defaults:
                    var_data['default'] = jinja_defaults[var_name]
-          except Exception:
-            # keep behavior stable on any extraction errors
+          except (KeyError, TypeError, AttributeError):
+            # Keep behavior stable on any extraction errors
            pass

          self.__variables = VariableCollection(filtered_specs)

+ 297 - 0
cli/core/validators.py

@@ -0,0 +1,297 @@
+"""Semantic validators for template content.
+
+This module provides validators for specific file types and formats,
+enabling semantic validation beyond Jinja2 syntax checking.
+"""
+
+from __future__ import annotations
+
+import logging
+from abc import ABC, abstractmethod
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+
+import yaml
+from rich.console import Console
+
+logger = logging.getLogger(__name__)
+console = Console()
+
+
+class ValidationResult:
+    """Represents the result of a validation operation."""
+    
+    def __init__(self):
+        self.errors: List[str] = []
+        self.warnings: List[str] = []
+        self.info: List[str] = []
+    
+    def add_error(self, message: str) -> None:
+        """Add an error message."""
+        self.errors.append(message)
+        logger.error(f"Validation error: {message}")
+    
+    def add_warning(self, message: str) -> None:
+        """Add a warning message."""
+        self.warnings.append(message)
+        logger.warning(f"Validation warning: {message}")
+    
+    def add_info(self, message: str) -> None:
+        """Add an info message."""
+        self.info.append(message)
+        logger.info(f"Validation info: {message}")
+    
+    @property
+    def is_valid(self) -> bool:
+        """Check if validation passed (no errors)."""
+        return len(self.errors) == 0
+    
+    @property
+    def has_warnings(self) -> bool:
+        """Check if validation has warnings."""
+        return len(self.warnings) > 0
+    
+    def display(self, context: str = "Validation") -> None:
+        """Display validation results to console."""
+        if self.errors:
+            console.print(f"\n[red]✗ {context} Failed:[/red]")
+            for error in self.errors:
+                console.print(f"  [red]• {error}[/red]")
+        
+        if self.warnings:
+            console.print(f"\n[yellow]⚠ {context} Warnings:[/yellow]")
+            for warning in self.warnings:
+                console.print(f"  [yellow]• {warning}[/yellow]")
+        
+        if self.info:
+            console.print(f"\n[blue]ℹ {context} Info:[/blue]")
+            for info_msg in self.info:
+                console.print(f"  [blue]• {info_msg}[/blue]")
+        
+        if self.is_valid and not self.has_warnings:
+            console.print(f"\n[green]✓ {context} Passed[/green]")
+
+
+class ContentValidator(ABC):
+    """Abstract base class for content validators."""
+    
+    @abstractmethod
+    def validate(self, content: str, file_path: str) -> ValidationResult:
+        """Validate content and return results.
+        
+        Args:
+            content: The file content to validate
+            file_path: Path to the file (for error messages)
+            
+        Returns:
+            ValidationResult with errors, warnings, and info
+        """
+        pass
+    
+    @abstractmethod
+    def can_validate(self, file_path: str) -> bool:
+        """Check if this validator can validate the given file.
+        
+        Args:
+            file_path: Path to the file
+            
+        Returns:
+            True if this validator can handle the file
+        """
+        pass
+
+
+class DockerComposeValidator(ContentValidator):
+    """Validator for Docker Compose files."""
+    
+    COMPOSE_FILENAMES = {
+        "docker-compose.yml",
+        "docker-compose.yaml",
+        "compose.yml",
+        "compose.yaml",
+    }
+    
+    def can_validate(self, file_path: str) -> bool:
+        """Check if file is a Docker Compose file."""
+        filename = Path(file_path).name.lower()
+        return filename in self.COMPOSE_FILENAMES
+    
+    def validate(self, content: str, file_path: str) -> ValidationResult:
+        """Validate Docker Compose file structure."""
+        result = ValidationResult()
+        
+        try:
+            # Parse YAML
+            data = yaml.safe_load(content)
+            
+            if not isinstance(data, dict):
+                result.add_error("Docker Compose file must be a YAML dictionary")
+                return result
+            
+            # Check for version (optional in Compose v2, but good practice)
+            if "version" not in data:
+                result.add_info("No 'version' field specified (using Compose v2 format)")
+            
+            # Check for services (required)
+            if "services" not in data:
+                result.add_error("Missing required 'services' section")
+                return result
+            
+            services = data.get("services", {})
+            if not isinstance(services, dict):
+                result.add_error("'services' must be a dictionary")
+                return result
+            
+            if not services:
+                result.add_warning("No services defined")
+            
+            # Validate each service
+            for service_name, service_config in services.items():
+                self._validate_service(service_name, service_config, result)
+            
+            # Check for networks (optional but recommended)
+            if "networks" in data:
+                networks = data.get("networks", {})
+                if networks and isinstance(networks, dict):
+                    result.add_info(f"Defines {len(networks)} network(s)")
+            
+            # Check for volumes (optional)
+            if "volumes" in data:
+                volumes = data.get("volumes", {})
+                if volumes and isinstance(volumes, dict):
+                    result.add_info(f"Defines {len(volumes)} volume(s)")
+            
+        except yaml.YAMLError as e:
+            result.add_error(f"YAML parsing error: {e}")
+        except Exception as e:
+            result.add_error(f"Unexpected validation error: {e}")
+        
+        return result
+    
+    def _validate_service(self, name: str, config: Any, result: ValidationResult) -> None:
+        """Validate a single service configuration."""
+        if not isinstance(config, dict):
+            result.add_error(f"Service '{name}': configuration must be a dictionary")
+            return
+        
+        # Check for image or build (at least one required)
+        has_image = "image" in config
+        has_build = "build" in config
+        
+        if not has_image and not has_build:
+            result.add_error(f"Service '{name}': must specify 'image' or 'build'")
+        
+        # Warn about common misconfigurations
+        if "restart" in config:
+            restart_value = config["restart"]
+            valid_restart_policies = ["no", "always", "on-failure", "unless-stopped"]
+            if restart_value not in valid_restart_policies:
+                result.add_warning(
+                    f"Service '{name}': restart policy '{restart_value}' may be invalid. "
+                    f"Valid values: {', '.join(valid_restart_policies)}"
+                )
+        
+        # Check for environment variables
+        if "environment" in config:
+            env = config["environment"]
+            if isinstance(env, list):
+                # Check for duplicate keys in list format
+                keys = [e.split("=")[0] for e in env if isinstance(e, str) and "=" in e]
+                duplicates = {k for k in keys if keys.count(k) > 1}
+                if duplicates:
+                    result.add_warning(
+                        f"Service '{name}': duplicate environment variables: {', '.join(duplicates)}"
+                    )
+        
+        # Check for ports
+        if "ports" in config:
+            ports = config["ports"]
+            if not isinstance(ports, list):
+                result.add_warning(f"Service '{name}': 'ports' should be a list")
+
+
+class YAMLValidator(ContentValidator):
+    """Basic YAML syntax validator."""
+    
+    def can_validate(self, file_path: str) -> bool:
+        """Check if file is a YAML file."""
+        return Path(file_path).suffix.lower() in [".yml", ".yaml"]
+    
+    def validate(self, content: str, file_path: str) -> ValidationResult:
+        """Validate YAML syntax."""
+        result = ValidationResult()
+        
+        try:
+            yaml.safe_load(content)
+            result.add_info("YAML syntax is valid")
+        except yaml.YAMLError as e:
+            result.add_error(f"YAML parsing error: {e}")
+        
+        return result
+
+
+class ValidatorRegistry:
+    """Registry for content validators."""
+    
+    def __init__(self):
+        self.validators: List[ContentValidator] = []
+        self._register_default_validators()
+    
+    def _register_default_validators(self) -> None:
+        """Register built-in validators."""
+        self.register(DockerComposeValidator())
+        self.register(YAMLValidator())
+    
+    def register(self, validator: ContentValidator) -> None:
+        """Register a validator.
+        
+        Args:
+            validator: The validator to register
+        """
+        self.validators.append(validator)
+        logger.debug(f"Registered validator: {validator.__class__.__name__}")
+    
+    def get_validator(self, file_path: str) -> Optional[ContentValidator]:
+        """Get the most appropriate validator for a file.
+        
+        Args:
+            file_path: Path to the file
+            
+        Returns:
+            ContentValidator if found, None otherwise
+        """
+        # Try specific validators first (e.g., DockerComposeValidator before YAMLValidator)
+        for validator in self.validators:
+            if validator.can_validate(file_path):
+                return validator
+        return None
+    
+    def validate_file(self, content: str, file_path: str) -> ValidationResult:
+        """Validate file content using appropriate validator.
+        
+        Args:
+            content: The file content
+            file_path: Path to the file
+            
+        Returns:
+            ValidationResult with validation results
+        """
+        validator = self.get_validator(file_path)
+        
+        if validator:
+            logger.debug(f"Validating {file_path} with {validator.__class__.__name__}")
+            return validator.validate(content, file_path)
+        
+        # No validator found - return empty result
+        result = ValidationResult()
+        result.add_info(f"No semantic validator available for {Path(file_path).suffix} files")
+        return result
+
+
+# Global registry instance
+_registry = ValidatorRegistry()
+
+
+def get_validator_registry() -> ValidatorRegistry:
+    """Get the global validator registry."""
+    return _registry
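The registry resolves a file to the first registered validator whose `can_validate` matches, so registration order is the priority order (`DockerComposeValidator` before the generic `YAMLValidator`). The same first-match dispatch, sketched stdlib-only with a hypothetical JSON validator standing in for the YAML-based ones:

```python
import json
from pathlib import Path

class JSONValidator:
    """Illustrative stand-in for a ContentValidator (stdlib json, not PyYAML)."""

    def can_validate(self, file_path):
        return Path(file_path).suffix.lower() == ".json"

    def validate(self, content, file_path):
        try:
            json.loads(content)
            return []  # no errors
        except json.JSONDecodeError as e:
            return [f"JSON parsing error: {e}"]

class Registry:
    def __init__(self):
        self.validators = []

    def register(self, validator):
        self.validators.append(validator)

    def get_validator(self, file_path):
        # First match wins: register specific validators before generic ones
        return next((v for v in self.validators if v.can_validate(file_path)), None)

reg = Registry()
reg.register(JSONValidator())
assert reg.get_validator("config.json") is not None
assert reg.get_validator("notes.txt") is None
assert reg.get_validator("config.json").validate('{"ok": true}', "config.json") == []
```

The `ValidatorRegistry` in this commit follows the same shape, with `ValidationResult` objects instead of plain error lists.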

+ 377 - 0
cli/core/variable.py

@@ -0,0 +1,377 @@
+from __future__ import annotations
+
+from typing import Any, Dict, List, Optional, Set
+from urllib.parse import urlparse
+import logging
+import re
+
+logger = logging.getLogger(__name__)
+
+TRUE_VALUES = {"true", "1", "yes", "on"}
+FALSE_VALUES = {"false", "0", "no", "off"}
+EMAIL_REGEX = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
+
+
+class Variable:
+  """Represents a single templating variable with lightweight validation."""
+
+  def __init__(self, data: dict[str, Any]) -> None:
+    """Initialize Variable from a dictionary containing variable specification.
+    
+    Args:
+        data: Dictionary containing variable specification with required 'name' key
+              and optional keys: description, type, options, prompt, value, default, section, origin
+              
+    Raises:
+        ValueError: If data is not a dict, missing 'name' key, or has invalid default value
+    """
+    # Validate input
+    if not isinstance(data, dict):
+      raise ValueError("Variable data must be a dictionary")
+    
+    if "name" not in data:
+      raise ValueError("Variable data must contain 'name' key")
+    
+    # Track which fields were explicitly provided in source data
+    self._explicit_fields: Set[str] = set(data.keys())
+    
+    # Initialize fields
+    self.name: str = data["name"]
+    self.description: Optional[str] = data.get("description") or data.get("display", "")
+    self.type: str = data.get("type", "str")
+    self.options: Optional[List[Any]] = data.get("options", [])
+    self.prompt: Optional[str] = data.get("prompt")
+    self.value: Any = data.get("value") if data.get("value") is not None else data.get("default")
+    self.origin: Optional[str] = data.get("origin")
+    self.sensitive: bool = data.get("sensitive", False)
+    # Optional extra explanation used by interactive prompts
+    self.extra: Optional[str] = data.get("extra")
+    # Flag indicating this variable should be auto-generated when empty
+    self.autogenerated: bool = data.get("autogenerated", False)
+    # Original value before config override (used for display)
+    self.original_value: Optional[Any] = data.get("original_value")
+
+    # Validate and convert the default/initial value if present
+    if self.value is not None:
+      try:
+        self.value = self.convert(self.value)
+      except ValueError as exc:
+        raise ValueError(f"Invalid default for variable '{self.name}': {exc}")
+
+
+  def convert(self, value: Any) -> Any:
+    """Validate and convert a raw value based on the variable type.
+    
+    This method performs type conversion but does NOT check if the value
+    is required. Use validate_and_convert() for full validation including
+    required field checks.
+    """
+    if value is None:
+      return None
+
+    # Treat empty strings as None to avoid storing "" for missing values.
+    if isinstance(value, str) and value.strip() == "":
+      return None
+
+    # Type conversion mapping for cleaner code
+    converters = {
+      "bool": self._convert_bool,
+      "int": self._convert_int, 
+      "float": self._convert_float,
+      "enum": self._convert_enum,
+      "url": self._convert_url,
+      "email": self._convert_email,
+    }
+    
+    converter = converters.get(self.type)
+    if converter:
+      return converter(value)
+    
+    # Default to string conversion
+    return str(value)
+  
+  def validate_and_convert(self, value: Any, check_required: bool = True) -> Any:
+    """Validate and convert a value with comprehensive checks.
+    
+    This method combines type conversion with validation logic including
+    required field checks. It's the recommended method for user input validation.
+    
+    Args:
+        value: The raw value to validate and convert
+        check_required: If True, raises ValueError for required fields with empty values
+        
+    Returns:
+        The converted and validated value
+        
+    Raises:
+        ValueError: If validation fails (invalid format, required field empty, etc.)
+        
+    Examples:
+        # Basic validation
+        var.validate_and_convert("example@email.com")  # Returns validated email
+        
+        # Required field validation
+        var.validate_and_convert("", check_required=True)  # Raises ValueError if required
+        
+        # Autogenerated variables - allow empty values
+        var.validate_and_convert("", check_required=False)  # Returns None for autogeneration
+    """
+    # First, convert the value using standard type conversion
+    converted = self.convert(value)
+    
+    # Special handling for autogenerated variables
+    # Allow empty values as they will be auto-generated later
+    if self.autogenerated and (converted is None or (isinstance(converted, str) and (converted == "" or converted == "*auto"))):
+      return None  # Signal that auto-generation should happen
+    
+    # Check if this is a required field and the value is empty
+    if check_required and self.is_required():
+      if converted is None or (isinstance(converted, str) and converted == ""):
+        raise ValueError("This field is required and cannot be empty")
+    
+    return converted
+
+  def _convert_bool(self, value: Any) -> bool:
+    """Convert value to boolean."""
+    if isinstance(value, bool):
+      return value
+    if isinstance(value, str):
+      lowered = value.strip().lower()
+      if lowered in TRUE_VALUES:
+        return True
+      if lowered in FALSE_VALUES:
+        return False
+    raise ValueError("value must be a boolean (true/false)")
+
+  def _convert_int(self, value: Any) -> Optional[int]:
+    """Convert value to integer."""
+    if isinstance(value, int):
+      return value
+    if isinstance(value, str) and value.strip() == "":
+      return None
+    try:
+      return int(value)
+    except (TypeError, ValueError) as exc:
+      raise ValueError("value must be an integer") from exc
+
+  def _convert_float(self, value: Any) -> Optional[float]:
+    """Convert value to float."""
+    if isinstance(value, float):
+      return value
+    if isinstance(value, str) and value.strip() == "":
+      return None
+    try:
+      return float(value)
+    except (TypeError, ValueError) as exc:
+      raise ValueError("value must be a float") from exc
+
+  def _convert_enum(self, value: Any) -> Optional[str]:
+    if value == "":
+      return None
+    val = str(value)
+    if self.options and val not in self.options:
+      raise ValueError(f"value must be one of: {', '.join(self.options)}")
+    return val
+
+  def _convert_url(self, value: Any) -> Optional[str]:
+    val = str(value).strip()
+    if not val:
+      return None
+    parsed = urlparse(val)
+    if not (parsed.scheme and parsed.netloc):
+      raise ValueError("value must be a valid URL (include scheme and host)")
+    return val
+
+  def _convert_email(self, value: Any) -> Optional[str]:
+    val = str(value).strip()
+    if not val:
+      return None
+    if not EMAIL_REGEX.fullmatch(val):
+      raise ValueError("value must be a valid email address")
+    return val
+
+  def to_dict(self) -> Dict[str, Any]:
+    """Serialize Variable to a dictionary for storage."""
+    result = {}
+    
+    # Always include type
+    if self.type:
+      result['type'] = self.type
+    
+    # Include value/default if not None
+    if self.value is not None:
+      result['default'] = self.value
+    
+    # Include string fields if truthy
+    for field in ('description', 'prompt', 'extra', 'origin'):
+      if value := getattr(self, field):
+        result[field] = value
+    
+    # Include boolean/list fields if truthy (but empty list is OK for options)
+    if self.sensitive:
+      result['sensitive'] = True
+    if self.autogenerated:
+      result['autogenerated'] = True
+    if self.options is not None:  # Allow empty list
+      result['options'] = self.options
+    
+    return result
+  
+  def get_display_value(self, mask_sensitive: bool = True, max_length: int = 30, show_none: bool = True) -> str:
+    """Get formatted display value with optional masking and truncation.
+    
+    Args:
+        mask_sensitive: If True, mask sensitive values with asterisks
+        max_length: Maximum length before truncation (0 = no limit)
+        show_none: If True, display "(none)" for None values instead of empty string
+        
+    Returns:
+        Formatted string representation of the value
+    """
+    if self.value is None or self.value == "":
+      # Show (*auto) for autogenerated variables instead of (none)
+      if self.autogenerated:
+        return "[dim](*auto)[/dim]" if show_none else ""
+      return "[dim](none)[/dim]" if show_none else ""
+    
+    # Mask sensitive values
+    if self.sensitive and mask_sensitive:
+      return "********"
+    
+    # Convert to string
+    display = str(self.value)
+    
+    # Truncate if needed
+    if max_length > 0 and len(display) > max_length:
+      return display[:max_length - 3] + "..."
+    
+    return display
+  
+  def get_normalized_default(self) -> Any:
+    """Get normalized default value suitable for prompts and display."""
+    try:
+      typed = self.convert(self.value)
+    except Exception:
+      typed = self.value
+    
+    # Autogenerated: return display hint
+    if self.autogenerated and not typed:
+      return "*auto"
+    
+    # Type-specific handlers
+    if self.type == "enum":
+      if not self.options:
+        return typed
+      return self.options[0] if typed is None or str(typed) not in self.options else str(typed)
+    
+    if self.type == "bool":
+      return typed if isinstance(typed, bool) else (None if typed is None else bool(typed))
+    
+    if self.type == "int":
+      try:
+        return int(typed) if typed not in (None, "") else None
+      except Exception:
+        return None
+    
+    # Default: return string or None
+    return None if typed is None else str(typed)
+  
+  def get_prompt_text(self) -> str:
+    """Get formatted prompt text for interactive input.
+    
+    Returns:
+        Prompt text with optional type hints and descriptions
+    """
+    prompt_text = self.prompt or self.description or self.name
+    
+    # Add type hint for semantic types if there's a default
+    if self.value is not None and self.type in ["email", "url"]:
+      prompt_text += f" ({self.type})"
+    
+    return prompt_text
+  
+  def get_validation_hint(self) -> Optional[str]:
+    """Get validation hint for prompts (e.g., enum options).
+    
+    Returns:
+        Formatted hint string or None if no hint needed
+    """
+    hints = []
+    
+    # Add enum options
+    if self.type == "enum" and self.options:
+      hints.append(f"Options: {', '.join(self.options)}")
+    
+    # Add extra help text
+    if self.extra:
+      hints.append(self.extra)
+    
+    return " — ".join(hints) if hints else None
+  
+  def is_required(self) -> bool:
+    """Check if this variable requires a value (cannot be empty/None).
+    
+    A variable is considered required if:
+    - It doesn't have a default value (value is None)
+    - It's not marked as autogenerated (which can be empty and generated later)
+    - It's not a boolean type (booleans default to False if not set)
+    
+    Returns:
+        True if the variable must have a non-empty value, False otherwise
+    """
+    # Autogenerated variables can be empty (will be generated later)
+    if self.autogenerated:
+      return False
+    
+    # Boolean variables always have a value (True or False)
+    if self.type == "bool":
+      return False
+    
+    # Variables with a default value are not required
+    if self.value is not None:
+      return False
+    
+    # No default value and not autogenerated = required
+    return True
+  
+  def clone(self, update: Optional[Dict[str, Any]] = None) -> 'Variable':
+    """Create a deep copy of the variable with optional field updates.
+    
+    This is more efficient than converting to dict and back when copying variables.
+    
+    Args:
+        update: Optional dictionary of field updates to apply to the clone
+        
+    Returns:
+        New Variable instance with copied data
+        
+    Example:
+        var2 = var1.clone(update={'origin': 'template'})
+    """
+    data = {
+      'name': self.name,
+      'type': self.type,
+      'value': self.value,
+      'description': self.description,
+      'prompt': self.prompt,
+      'options': self.options.copy() if self.options else None,
+      'origin': self.origin,
+      'sensitive': self.sensitive,
+      'extra': self.extra,
+      'autogenerated': self.autogenerated,
+      'original_value': self.original_value,
+    }
+    
+    # Apply updates if provided
+    if update:
+      data.update(update)
+    
+    # Create new variable
+    cloned = Variable(data)
+    
+    # Preserve explicit fields from original, and add any update keys
+    cloned._explicit_fields = self._explicit_fields.copy()
+    if update:
+      cloned._explicit_fields.update(update.keys())
+    
+    return cloned

+ 0 - 1178
cli/core/variables.py

@@ -1,1178 +0,0 @@
-from __future__ import annotations
-
-from collections import OrderedDict
-from dataclasses import dataclass, field
-from typing import Any, Dict, List, Optional, Set, Union
-from urllib.parse import urlparse
-import logging
-import re
-
-logger = logging.getLogger(__name__)
-
-TRUE_VALUES = {"true", "1", "yes", "on"}
-FALSE_VALUES = {"false", "0", "no", "off"}
-HOSTNAME_REGEX = re.compile(r"^(?=.{1,253}$)(?!-)[A-Za-z0-9_-]{1,63}(?<!-)(\.(?!-)[A-Za-z0-9_-]{1,63}(?<!-))*$")
-EMAIL_REGEX = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
-
-
-class Variable:
-  """Represents a single templating variable with lightweight validation."""
-
-  def __init__(self, data: dict[str, Any]) -> None:
-    """Initialize Variable from a dictionary containing variable specification.
-    
-    Args:
-        data: Dictionary containing variable specification with required 'name' key
-              and optional keys: description, type, options, prompt, value, default, section, origin
-              
-    Raises:
-        ValueError: If data is not a dict, missing 'name' key, or has invalid default value
-    """
-    # Validate input
-    if not isinstance(data, dict):
-      raise ValueError("Variable data must be a dictionary")
-    
-    if "name" not in data:
-      raise ValueError("Variable data must contain 'name' key")
-    
-    # Track which fields were explicitly provided in source data
-    self._explicit_fields: Set[str] = set(data.keys())
-    
-    # Initialize fields
-    self.name: str = data["name"]
-    self.description: Optional[str] = data.get("description") or data.get("display", "")
-    self.type: str = data.get("type", "str")
-    self.options: Optional[List[Any]] = data.get("options", [])
-    self.prompt: Optional[str] = data.get("prompt")
-    self.value: Any = data.get("value") if data.get("value") is not None else data.get("default")
-    self.section: Optional[str] = data.get("section")
-    self.origin: Optional[str] = data.get("origin")
-    self.sensitive: bool = data.get("sensitive", False)
-    # Optional extra explanation used by interactive prompts
-    self.extra: Optional[str] = data.get("extra")
-    # Flag indicating this variable should be auto-generated when empty
-    self.autogenerated: bool = data.get("autogenerated", False)
-    # Original value before config override (used for display)
-    self.original_value: Optional[Any] = data.get("original_value")
-
-    # Validate and convert the default/initial value if present
-    if self.value is not None:
-      try:
-        self.value = self.convert(self.value)
-      except ValueError as exc:
-        raise ValueError(f"Invalid default for variable '{self.name}': {exc}")
-
-  def _validate_not_empty(self, value: Any, converted_value: Any) -> None:
-    """Validate that a value is not empty for non-boolean types."""
-    if self.type not in ["bool"] and (converted_value is None or converted_value == ""):
-      raise ValueError("value cannot be empty")
-
-  def _validate_enum_option(self, value: str) -> None:
-    """Validate that a value is in the allowed enum options."""
-    if self.options and value not in self.options:
-      raise ValueError(f"value must be one of: {', '.join(self.options)}")
-
-  def _validate_regex_pattern(self, value: str, pattern: re.Pattern, error_msg: str) -> None:
-    """Validate that a value matches a regex pattern."""
-    if not pattern.fullmatch(value):
-      raise ValueError(error_msg)
-
-  def _validate_url_structure(self, parsed_url) -> None:
-    """Validate that a parsed URL has required components."""
-    if not (parsed_url.scheme and parsed_url.netloc):
-      raise ValueError("value must be a valid URL (include scheme and host)")
-
-  def convert(self, value: Any) -> Any:
-    """Validate and convert a raw value based on the variable type."""
-    if value is None:
-      return None
-
-    # Treat empty strings as None to avoid storing "" for missing values.
-    if isinstance(value, str) and value.strip() == "":
-      return None
-
-    # Type conversion mapping for cleaner code
-    converters = {
-      "bool": self._convert_bool,
-      "int": self._convert_int, 
-      "float": self._convert_float,
-      "enum": self._convert_enum,
-      "hostname": self._convert_hostname,
-      "url": self._convert_url,
-      "email": self._convert_email,
-    }
-    
-    converter = converters.get(self.type)
-    if converter:
-      return converter(value)
-    
-    # Default to string conversion
-    return str(value)
-
-  def _convert_bool(self, value: Any) -> bool:
-    """Convert value to boolean."""
-    if isinstance(value, bool):
-      return value
-    if isinstance(value, str):
-      lowered = value.strip().lower()
-      if lowered in TRUE_VALUES:
-        return True
-      if lowered in FALSE_VALUES:
-        return False
-    raise ValueError("value must be a boolean (true/false)")
-
-  def _convert_int(self, value: Any) -> Optional[int]:
-    """Convert value to integer."""
-    if isinstance(value, int):
-      return value
-    if isinstance(value, str) and value.strip() == "":
-      return None
-    try:
-      return int(value)
-    except (TypeError, ValueError) as exc:
-      raise ValueError("value must be an integer") from exc
-
-  def _convert_float(self, value: Any) -> Optional[float]:
-    """Convert value to float."""
-    if isinstance(value, float):
-      return value
-    if isinstance(value, str) and value.strip() == "":
-      return None
-    try:
-      return float(value)
-    except (TypeError, ValueError) as exc:
-      raise ValueError("value must be a float") from exc
-
-  def _convert_enum(self, value: Any) -> Optional[str]:
-    """Convert value to enum option."""
-    if value == "":
-      return None
-    val = str(value)
-    self._validate_enum_option(val)
-    return val
-
-  def _convert_hostname(self, value: Any) -> Optional[str]:
-    """Convert and validate hostname; returns None for empty input."""
-    val = str(value).strip()
-    if not val:
-      return None
-    if val.lower() != "localhost":
-      self._validate_regex_pattern(val, HOSTNAME_REGEX, "value must be a valid hostname")
-    return val
-
-  def _convert_url(self, value: Any) -> Optional[str]:
-    """Convert and validate URL; returns None for empty input."""
-    val = str(value).strip()
-    if not val:
-      return None
-    parsed = urlparse(val)
-    self._validate_url_structure(parsed)
-    return val
-
-  def _convert_email(self, value: Any) -> Optional[str]:
-    """Convert and validate email; returns None for empty input."""
-    val = str(value).strip()
-    if not val:
-      return None
-    self._validate_regex_pattern(val, EMAIL_REGEX, "value must be a valid email address")
-    return val
-
-  def get_typed_value(self) -> Any:
-    """Return the stored value converted to the appropriate Python type."""
-    return self.convert(self.value)
-  
-  def to_dict(self) -> Dict[str, Any]:
-    """Serialize Variable to a dictionary for storage.
-    
-    Returns:
-        Dictionary representation of the variable with only relevant fields.
-    """
-    var_dict = {}
-    
-    if self.type:
-      var_dict["type"] = self.type
-    
-    if self.value is not None:
-      var_dict["default"] = self.value
-    
-    if self.description:
-      var_dict["description"] = self.description
-    
-    if self.prompt:
-      var_dict["prompt"] = self.prompt
-    
-    if self.sensitive:
-      var_dict["sensitive"] = self.sensitive
-    
-    if self.extra:
-      var_dict["extra"] = self.extra
-    
-    if self.autogenerated:
-      var_dict["autogenerated"] = self.autogenerated
-    
-    if self.options:
-      var_dict["options"] = self.options
-    
-    if self.origin:
-      var_dict["origin"] = self.origin
-    
-    return var_dict
-  
-  def get_display_value(self, mask_sensitive: bool = True, max_length: int = 30, show_none: bool = True) -> str:
-    """Get formatted display value with optional masking and truncation.
-    
-    Args:
-        mask_sensitive: If True, mask sensitive values with asterisks
-        max_length: Maximum length before truncation (0 = no limit)
-        show_none: If True, display "(none)" for None values instead of empty string
-        
-    Returns:
-        Formatted string representation of the value
-    """
-    if self.value is None or self.value == "":
-      # Show (*auto) for autogenerated variables instead of (none)
-      if self.autogenerated:
-        return "[dim](*auto)[/dim]" if show_none else ""
-      return "[dim](none)[/dim]" if show_none else ""
-    
-    # Mask sensitive values
-    if self.sensitive and mask_sensitive:
-      return "********"
-    
-    # Convert to string
-    display = str(self.value)
-    
-    # Truncate if needed
-    if max_length > 0 and len(display) > max_length:
-      return display[:max_length - 3] + "..."
-    
-    return display
-  
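The masking and truncation rules of `get_display_value()` reduce to a short standalone function. This sketch drops the rich markup and the `autogenerated`/`show_none` branches for brevity; the function name and defaults here are illustrative only.

```python
from typing import Any, Optional

def display_value(value: Optional[Any], sensitive: bool = False,
                  max_length: int = 30) -> str:
  if value is None or value == "":
    return "(none)"          # the real method wraps this in rich [dim] markup
  if sensitive:
    return "********"        # mask before any other formatting
  text = str(value)
  if 0 < max_length < len(text):
    return text[:max_length - 3] + "..."  # reserve 3 chars for the ellipsis
  return text
```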
-  def get_normalized_default(self) -> Any:
-    """Get normalized default value suitable for prompts and display.
-    
-    Handles type conversion and provides sensible defaults for different types.
-    Especially useful for enum, bool, and int types in interactive prompts.
-    
-    For autogenerated variables, returns "*auto" as a display hint.
-    
-    Returns:
-        Normalized default value appropriate for the variable type
-    """
-    try:
-      typed = self.get_typed_value()
-    except Exception:
-      typed = self.value
-    
-    # Autogenerated: return display hint
-    if self.autogenerated and (typed is None or typed == ""):
-      return "*auto"
-    
-    # Enum: ensure default is valid option
-    if self.type == "enum":
-      if not self.options:
-        return typed
-      # If typed is invalid or missing, use first option
-      if typed is None or str(typed) not in self.options:
-        return self.options[0]
-      return str(typed)
-    
-    # Boolean: return as bool type
-    if self.type == "bool":
-      if isinstance(typed, bool):
-        return typed
-      return None if typed is None else bool(typed)
-    
-    # Integer: return as int type
-    if self.type == "int":
-      try:
-        return int(typed) if typed is not None and typed != "" else None
-      except Exception:
-        return None
-    
-    # Default: return string or None
-    return None if typed is None else str(typed)
-  
-  def get_prompt_text(self) -> str:
-    """Get formatted prompt text for interactive input.
-    
-    Returns:
-        Prompt text with optional type hints and descriptions
-    """
-    prompt_text = self.prompt or self.description or self.name
-    
-    # Add type hint for semantic types if there's a default
-    if self.value is not None and self.type in ["hostname", "email", "url"]:
-      prompt_text += f" ({self.type})"
-    
-    return prompt_text
-  
-  def get_validation_hint(self) -> Optional[str]:
-    """Get validation hint for prompts (e.g., enum options).
-    
-    Returns:
-        Formatted hint string or None if no hint needed
-    """
-    hints = []
-    
-    # Add enum options
-    if self.type == "enum" and self.options:
-      hints.append(f"Options: {', '.join(self.options)}")
-    
-    # Add extra help text
-    if self.extra:
-      hints.append(self.extra)
-    
-    return " — ".join(hints) if hints else None
-  
-  def is_required(self) -> bool:
-    """Check if this variable requires a value (cannot be empty/None).
-    
-    A variable is considered required if:
-    - It doesn't have a default value (value is None)
-    - It's not marked as autogenerated (which can be empty and generated later)
-    - It's not a boolean type (booleans default to False if not set)
-    
-    Returns:
-        True if the variable must have a non-empty value, False otherwise
-    """
-    # Autogenerated variables can be empty (will be generated later)
-    if self.autogenerated:
-      return False
-    
-    # Boolean variables always have a value (True or False)
-    if self.type == "bool":
-      return False
-    
-    # Variables with a default value are not required
-    if self.value is not None:
-      return False
-    
-    # No default value and not autogenerated = required
-    return True
-  
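The three rules in the `is_required()` docstring can be expressed as a standalone predicate; the parameter names below are illustrative, not the real attribute access.

```python
from typing import Any, Optional

def is_required(value: Optional[Any], var_type: str, autogenerated: bool) -> bool:
  if autogenerated:        # empty now, generated later
    return False
  if var_type == "bool":   # booleans always resolve to True/False
    return False
  return value is None     # no default means the user must supply one
```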
-  def clone(self, update: Optional[Dict[str, Any]] = None) -> 'Variable':
-    """Create a deep copy of the variable with optional field updates.
-    
-    This is more efficient than converting to dict and back when copying variables.
-    
-    Args:
-        update: Optional dictionary of field updates to apply to the clone
-        
-    Returns:
-        New Variable instance with copied data
-        
-    Example:
-        var2 = var1.clone(update={'origin': 'template'})
-    """
-    data = {
-      'name': self.name,
-      'type': self.type,
-      'value': self.value,
-      'description': self.description,
-      'prompt': self.prompt,
-      'options': self.options.copy() if self.options else None,
-      'section': self.section,
-      'origin': self.origin,
-      'sensitive': self.sensitive,
-      'extra': self.extra,
-      'autogenerated': self.autogenerated,
-      'original_value': self.original_value,
-    }
-    
-    # Apply updates if provided
-    if update:
-      data.update(update)
-    
-    # Create new variable
-    cloned = Variable(data)
-    
-    # Preserve explicit fields from original, and add any update keys
-    cloned._explicit_fields = self._explicit_fields.copy()
-    if update:
-      cloned._explicit_fields.update(update.keys())
-    
-    return cloned
-  
-class VariableSection:
-  """Groups variables together with shared metadata for presentation."""
-
-  def __init__(self, data: dict[str, Any]) -> None:
-    """Initialize VariableSection from a dictionary.
-    
-    Args:
-        data: Dictionary containing section specification with required 'key' and 'title' keys
-    """
-    if not isinstance(data, dict):
-      raise ValueError("VariableSection data must be a dictionary")
-    
-    if "key" not in data:
-      raise ValueError("VariableSection data must contain 'key'")
-    
-    if "title" not in data:
-      raise ValueError("VariableSection data must contain 'title'")
-    
-    self.key: str = data["key"]
-    self.title: str = data["title"]
-    self.variables: OrderedDict[str, Variable] = OrderedDict()
-    self.description: Optional[str] = data.get("description")
-    self.toggle: Optional[str] = data.get("toggle")
-    # Default "general" section to required=True, all others to required=False
-    self.required: bool = data.get("required", data["key"] == "general")
-    # Section dependencies - can be string or list of strings
-    needs_value = data.get("needs")
-    if needs_value:
-      if isinstance(needs_value, str):
-        self.needs: List[str] = [needs_value]
-      elif isinstance(needs_value, list):
-        self.needs: List[str] = needs_value
-      else:
-        raise ValueError(f"Section '{self.key}' has invalid 'needs' value: must be string or list")
-    else:
-      self.needs: List[str] = []
-
-  def variable_names(self) -> list[str]:
-    return list(self.variables.keys())
-  
-  def to_dict(self) -> Dict[str, Any]:
-    """Serialize VariableSection to a dictionary for storage.
-    
-    Returns:
-        Dictionary representation of the section with all metadata and variables.
-    """
-    section_dict = {}
-    
-    if self.title:
-      section_dict["title"] = self.title
-    
-    if self.description:
-      section_dict["description"] = self.description
-    
-    if self.toggle:
-      section_dict["toggle"] = self.toggle
-    
-    # Always store required flag
-    section_dict["required"] = self.required
-    
-    # Store dependencies if any
-    if self.needs:
-      section_dict["needs"] = self.needs if len(self.needs) > 1 else self.needs[0]
-    
-    # Serialize all variables using their own to_dict method
-    section_dict["vars"] = {}
-    for var_name, variable in self.variables.items():
-      section_dict["vars"][var_name] = variable.to_dict()
-    
-    return section_dict
-  
-  def is_enabled(self) -> bool:
-    """Check if section is currently enabled based on toggle variable.
-    
-    Returns:
-        True if section is enabled (no toggle or toggle is True), False otherwise
-    """
-    if not self.toggle:
-      return True
-    
-    toggle_var = self.variables.get(self.toggle)
-    if not toggle_var:
-      return True
-    
-    try:
-      return bool(toggle_var.get_typed_value())
-    except Exception:
-      return False
-  
-  def get_toggle_value(self) -> Optional[bool]:
-    """Get the current value of the toggle variable.
-    
-    Returns:
-        Boolean value of toggle variable, or None if no toggle exists
-    """
-    if not self.toggle:
-      return None
-    
-    toggle_var = self.variables.get(self.toggle)
-    if not toggle_var:
-      return None
-    
-    try:
-      return bool(toggle_var.get_typed_value())
-    except Exception:
-      return None
-  
-  def clone(self, origin_update: Optional[str] = None) -> 'VariableSection':
-    """Create a deep copy of the section with all variables.
-    
-    This is more efficient than converting to dict and back when copying sections.
-    
-    Args:
-        origin_update: Optional origin string to apply to all cloned variables
-        
-    Returns:
-        New VariableSection instance with deep-copied variables
-        
-    Example:
-        section2 = section1.clone(origin_update='template')
-    """
-    # Create new section with same metadata
-    cloned = VariableSection({
-      'key': self.key,
-      'title': self.title,
-      'description': self.description,
-      'toggle': self.toggle,
-      'required': self.required,
-      'needs': self.needs.copy() if self.needs else None,
-    })
-    
-    # Deep copy all variables
-    for var_name, variable in self.variables.items():
-      if origin_update:
-        cloned.variables[var_name] = variable.clone(update={'origin': origin_update})
-      else:
-        cloned.variables[var_name] = variable.clone()
-    
-    return cloned
-
-class VariableCollection:
-  """Manages variables grouped by sections and builds Jinja context."""
-
-  def __init__(self, spec: dict[str, Any]) -> None:
-    """Initialize VariableCollection from a specification dictionary.
-    
-    Args:
-        spec: Dictionary containing the complete variable specification structure
-              Expected format (as used in compose.py):
-              {
-                "section_key": {
-                  "title": "Section Title",
-                  "prompt": "Optional prompt text",
-                  "toggle": "optional_toggle_var_name", 
-                  "description": "Optional description",
-                  "vars": {
-                    "var_name": {
-                      "description": "Variable description",
-                      "type": "str",
-                      "default": "default_value",
-                      ...
-                    }
-                  }
-                }
-              }
-    """
-    if not isinstance(spec, dict):
-      raise ValueError("Spec must be a dictionary")
-    
-    self._sections: Dict[str, VariableSection] = {}
-    # NOTE: The _variable_map provides a flat, O(1) lookup for any variable by its name,
-    # avoiding the need to iterate through sections. It stores references to the same
-    # Variable objects contained in the _sections structure.
-    self._variable_map: Dict[str, Variable] = {}
-    self._initialize_sections(spec)
-    # Validate dependencies after all sections are loaded
-    self._validate_dependencies()
-
-  def _initialize_sections(self, spec: dict[str, Any]) -> None:
-    """Initialize sections from the spec."""
-    for section_key, section_data in spec.items():
-      if not isinstance(section_data, dict):
-        continue
-      
-      section = self._create_section(section_key, section_data)
-      # Guard against None from empty YAML sections (vars: with no content)
-      vars_data = section_data.get("vars") or {}
-      self._initialize_variables(section, vars_data)
-      self._sections[section_key] = section
-    
-    # Validate all variable names are unique across sections
-    self._validate_unique_variable_names()
-
-  def _create_section(self, key: str, data: dict[str, Any]) -> VariableSection:
-    """Create a VariableSection from data."""
-    section_init_data = {
-      "key": key,
-      "title": data.get("title", key.replace("_", " ").title()),
-      "description": data.get("description"),
-      "toggle": data.get("toggle"),
-      "required": data.get("required", key == "general"),
-      "needs": data.get("needs")
-    }
-    return VariableSection(section_init_data)
-
-  def _initialize_variables(self, section: VariableSection, vars_data: dict[str, Any]) -> None:
-    """Initialize variables for a section."""
-    # Guard against None from empty YAML sections
-    if vars_data is None:
-      vars_data = {}
-    
-    for var_name, var_data in vars_data.items():
-      var_init_data = {"name": var_name, **var_data}
-      variable = Variable(var_init_data)
-      section.variables[var_name] = variable
-      # NOTE: Populate the direct lookup map for efficient access.
-      self._variable_map[var_name] = variable
-    
-    # Validate toggle variable after all variables are added
-    self._validate_section_toggle(section)
-    # TODO: Add more section-level validation:
-    #   - Validate that required sections have at least one non-toggle variable
-    #   - Validate that enum variables have non-empty options lists
-    #   - Validate that variable names follow naming conventions (e.g., lowercase_with_underscores)
-    #   - Validate that default values are compatible with their type definitions
-
-  def _validate_unique_variable_names(self) -> None:
-    """Validate that all variable names are unique across all sections.
-    
-    This prevents variable name conflicts that could cause confusion when:
-    - Building Jinja2 context (later variables overwrite earlier ones)
-    - Using --var CLI overrides (unclear which section is affected)
-    - Reading/setting defaults (ambiguous which variable is referenced)
-    
-    Raises:
-        ValueError: If duplicate variable names are found across sections
-    """
-    var_to_sections: Dict[str, List[str]] = {}
-    
-    # Build mapping of variable names to sections they appear in
-    for section_key, section in self._sections.items():
-      for var_name in section.variables.keys():
-        if var_name not in var_to_sections:
-          var_to_sections[var_name] = []
-        var_to_sections[var_name].append(section_key)
-    
-    # Find duplicates
-    duplicates = {var: sections for var, sections in var_to_sections.items() if len(sections) > 1}
-    
-    if duplicates:
-      error_lines = [
-        "Variable names must be unique across all sections, but found duplicates:"
-      ]
-      for var_name, sections in sorted(duplicates.items()):
-        error_lines.append(f"  - '{var_name}' appears in sections: {', '.join(sections)}")
-      error_lines.append("\nPlease rename variables to be unique or consolidate them into a single section.")
-      
-      error_msg = "\n".join(error_lines)
-      logger.error(error_msg)
-      raise ValueError(error_msg)
-  
-  def _validate_section_toggle(self, section: VariableSection) -> None:
-    """Validate that toggle variable is of type bool if it exists.
-    
-    If the toggle variable doesn't exist (e.g., filtered out), removes the toggle.
-    
-    Args:
-        section: The section to validate
-        
-    Raises:
-        ValueError: If toggle variable exists but is not boolean type
-    """
-    if not section.toggle:
-      return
-    
-    toggle_var = section.variables.get(section.toggle)
-    if not toggle_var:
-      # Toggle variable doesn't exist (e.g., was filtered out) - remove toggle metadata
-      section.toggle = None
-      return
-    
-    if toggle_var.type != "bool":
-      raise ValueError(
-        f"Section '{section.key}' toggle variable '{section.toggle}' must be type 'bool', "
-        f"but is type '{toggle_var.type}'"
-      )
-  
-  def _validate_dependencies(self) -> None:
-    """Validate section dependencies for cycles and missing references.
-    
-    Raises:
-        ValueError: If circular dependencies or missing section references are found
-    """
-    # Check for missing dependencies
-    for section_key, section in self._sections.items():
-      for dep in section.needs:
-        if dep not in self._sections:
-          raise ValueError(
-            f"Section '{section_key}' depends on '{dep}', but '{dep}' does not exist"
-          )
-    
-    # Check for circular dependencies using depth-first search
-    visited = set()
-    rec_stack = set()
-    
-    def has_cycle(section_key: str) -> bool:
-      visited.add(section_key)
-      rec_stack.add(section_key)
-      
-      section = self._sections[section_key]
-      for dep in section.needs:
-        if dep not in visited:
-          if has_cycle(dep):
-            return True
-        elif dep in rec_stack:
-          raise ValueError(
-            f"Circular dependency detected: '{section_key}' depends on '{dep}', "
-            f"which creates a cycle"
-          )
-      
-      rec_stack.remove(section_key)
-      return False
-    
-    for section_key in self._sections:
-      if section_key not in visited:
-        has_cycle(section_key)
-  
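The depth-first cycle check in `_validate_dependencies` can be sketched over a plain mapping of section key to dependency keys. This version returns a boolean instead of raising, and the names are illustrative.

```python
from typing import Dict, List

def find_cycle(needs: Dict[str, List[str]]) -> bool:
  visited: set = set()
  rec_stack: set = set()  # nodes on the current DFS path

  def visit(key: str) -> bool:
    visited.add(key)
    rec_stack.add(key)
    for dep in needs.get(key, []):
      if dep in rec_stack:   # back edge on the current path => cycle
        return True
      if dep not in visited and visit(dep):
        return True
    rec_stack.remove(key)
    return False

  return any(visit(k) for k in needs if k not in visited)
```

The `rec_stack` set distinguishes a true back edge (cycle) from a node merely visited via another branch, which is the point the `visited`/`rec_stack` pair in the method above encodes.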
-  def is_section_satisfied(self, section_key: str) -> bool:
-    """Check if all dependencies for a section are satisfied.
-    
-    A dependency is satisfied if:
-    1. The dependency section exists
-    2. The dependency section is enabled (if it has a toggle)
-    
-    Args:
-        section_key: The key of the section to check
-        
-    Returns:
-        True if all dependencies are satisfied, False otherwise
-    """
-    section = self._sections.get(section_key)
-    if not section:
-      return False
-    
-    # No dependencies = always satisfied
-    if not section.needs:
-      return True
-    
-    # Check each dependency
-    for dep_key in section.needs:
-      dep_section = self._sections.get(dep_key)
-      if not dep_section:
-        logger.warning(f"Section '{section_key}' depends on missing section '{dep_key}'")
-        return False
-      
-      # Check if dependency is enabled
-      if not dep_section.is_enabled():
-        logger.debug(f"Section '{section_key}' dependency '{dep_key}' is disabled")
-        return False
-    
-    return True
-
-  def sort_sections(self) -> None:
-    """Sort sections with the following priority:
-    
-    1. Dependencies come before dependents (topological sort)
-    2. Required sections first (in their original order)
-    3. Enabled sections with satisfied dependencies next (in their original order)
-    4. Disabled sections or sections with unsatisfied dependencies last (in their original order)
-    
-    This maintains the original ordering within each group while organizing
-    sections logically for display and user interaction, and ensures that
-    sections are prompted in the correct dependency order.
-    """
-    # First, perform topological sort to respect dependencies
-    sorted_keys = self._topological_sort()
-    
-    # Then apply priority sorting within dependency groups
-    section_items = [(key, self._sections[key]) for key in sorted_keys]
-    
-    # Define sort key: (priority, original_index)
-    # Priority: 0 = required, 1 = enabled with satisfied dependencies, 2 = disabled or unsatisfied dependencies
-    def get_sort_key(item_with_index):
-      index, (key, section) = item_with_index
-      if section.required:
-        priority = 0
-      elif section.is_enabled() and self.is_section_satisfied(key):
-        priority = 1
-      else:
-        priority = 2
-      return (priority, index)
-    
-    # Sort with original index to maintain order within each priority group
-    # Note: This preserves the topological order from earlier
-    sorted_items = sorted(
-      enumerate(section_items),
-      key=get_sort_key
-    )
-    
-    # Rebuild _sections dict in new order
-    self._sections = {key: section for _, (key, section) in sorted_items}
-  
-  def _topological_sort(self) -> List[str]:
-    """Perform topological sort on sections based on dependencies.
-    
-    Uses Kahn's algorithm to ensure dependencies come before dependents.
-    Preserves original order when no dependencies exist.
-    
-    Returns:
-        List of section keys in topologically sorted order
-    """
-    # Calculate in-degree (number of dependencies) for each section
-    in_degree = {key: len(section.needs) for key, section in self._sections.items()}
-    
-    # Find all sections with no dependencies
-    queue = [key for key, degree in in_degree.items() if degree == 0]
-    result = []
-    
-    # Process sections in order
-    while queue:
-      # Sort queue to preserve original order when possible
-      queue.sort(key=lambda k: list(self._sections.keys()).index(k))
-      
-      current = queue.pop(0)
-      result.append(current)
-      
-      # Find sections that depend on current
-      for key, section in self._sections.items():
-        if current in section.needs:
-          in_degree[key] -= 1
-          if in_degree[key] == 0:
-            queue.append(key)
-    
-    # If not all sections processed, there's a cycle (shouldn't happen due to validation)
-    if len(result) != len(self._sections):
-      logger.warning("Topological sort incomplete - possible dependency cycle")
-      return list(self._sections.keys())
-    
-    return result
-
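Kahn's algorithm as used by `_topological_sort()` can be sketched independently of the section objects. The edge scan here mirrors the method's inline `current in section.needs` loop; names are illustrative.

```python
from typing import Dict, List

def topo_sort(needs: Dict[str, List[str]]) -> List[str]:
  order = list(needs)                          # original insertion order
  in_degree = {k: len(v) for k, v in needs.items()}
  queue = [k for k in order if in_degree[k] == 0]
  result: List[str] = []
  while queue:
    queue.sort(key=order.index)                # stable within each wave
    current = queue.pop(0)
    result.append(current)
    for key, deps in needs.items():            # edges scanned inline
      if current in deps:
        in_degree[key] -= 1
        if in_degree[key] == 0:
          queue.append(key)
  if len(result) != len(needs):                # leftover nodes => cycle
    return order
  return result
```

Since Python 3.9 the stdlib `graphlib.TopologicalSorter` provides the same algorithm, though without the original-order tiebreak used here.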
-  def get_sections(self) -> Dict[str, VariableSection]:
-    """Get all sections in the collection."""
-    return self._sections.copy()
-  
-  def get_section(self, key: str) -> Optional[VariableSection]:
-    """Get a specific section by its key."""
-    return self._sections.get(key)
-  
-  def has_sections(self) -> bool:
-    """Check if the collection has any sections."""
-    return bool(self._sections)
-
-  def get_all_values(self) -> dict[str, Any]:
-    """Get all variable values as a dictionary."""
-    # NOTE: This method is optimized to use the _variable_map for direct O(1) access
-    # to each variable, which is much faster than iterating through sections.
-    all_values = {}
-    for var_name, variable in self._variable_map.items():
-      all_values[var_name] = variable.get_typed_value()
-    return all_values
-  
-  def get_satisfied_values(self) -> dict[str, Any]:
-    """Get variable values only from sections with satisfied dependencies.
-    
-    This respects both toggle states and section dependencies, ensuring that:
-    - Variables from disabled sections (toggle=false) are excluded
-    - Variables from sections with unsatisfied dependencies are excluded
-    
-    Returns:
-        Dictionary of variable names to values for satisfied sections only
-    """
-    satisfied_values = {}
-    
-    for section_key, section in self._sections.items():
-      # Skip sections with unsatisfied dependencies
-      if not self.is_section_satisfied(section_key):
-        logger.debug(f"Excluding variables from section '{section_key}' - dependencies not satisfied")
-        continue
-      
-      # Skip disabled sections (toggle check)
-      if not section.is_enabled():
-        logger.debug(f"Excluding variables from section '{section_key}' - section is disabled")
-        continue
-      
-      # Include all variables from this satisfied section
-      for var_name, variable in section.variables.items():
-        satisfied_values[var_name] = variable.get_typed_value()
-    
-    return satisfied_values
-
-  def get_sensitive_variables(self) -> Dict[str, Any]:
-    """Get only the sensitive variables with their values."""
-    return {name: var.value for name, var in self._variable_map.items() if var.sensitive and var.value}
-
-  def apply_defaults(self, defaults: dict[str, Any], origin: str = "cli") -> list[str]:
-    """Apply default values to variables, updating their origin.
-    
-    Args:
-        defaults: Dictionary mapping variable names to their default values
-        origin: Source of these defaults (e.g., 'config', 'cli')
-        
-    Returns:
-        List of variable names that were successfully updated
-    """
-    # NOTE: This method uses the _variable_map for a significant performance gain,
-    # as it allows direct O(1) lookup of variables instead of iterating
-    # through all sections to find a match.
-    successful = []
-    errors = []
-    
-    for var_name, value in defaults.items():
-      try:
-        variable = self._variable_map.get(var_name)
-        if not variable:
-          logger.warning(f"Variable '{var_name}' not found in template")
-          continue
-        
-        # Store original value before overriding (for display purposes)
-        # Only store if this is the first time config is being applied
-        if origin == "config" and not hasattr(variable, '_original_stored'):
-          variable.original_value = variable.value
-          variable._original_stored = True
-        
-        # Convert and set the new value
-        converted_value = variable.convert(value)
-        variable.value = converted_value
-        
-        # Set origin to the current source (not a chain)
-        variable.origin = origin
-        
-        successful.append(var_name)
-          
-      except ValueError as e:
-        error_msg = f"Invalid value for '{var_name}': {value} - {e}"
-        errors.append(error_msg)
-        logger.error(error_msg)
-    
-    if errors:
-      logger.warning(f"Some defaults failed to apply: {'; '.join(errors)}")
-    
-    return successful
-  
-  def validate_all(self) -> None:
-    """Validate all variables in the collection, skipping disabled and unsatisfied sections."""
-    errors: list[str] = []
-
-    for section_key, section in self._sections.items():
-      # Skip sections with unsatisfied dependencies
-      if not self.is_section_satisfied(section_key):
-        logger.debug(f"Skipping validation for section '{section_key}' - dependencies not satisfied")
-        continue
-      
-      # Check if the section is disabled by a toggle
-      if section.toggle:
-        toggle_var = section.variables.get(section.toggle)
-        if toggle_var and not toggle_var.get_typed_value():
-          logger.debug(f"Skipping validation for disabled section: '{section.key}'")
-          continue  # Skip this entire section
-
-      # Validate each variable in the section
-      for var_name, variable in section.variables.items():
-        try:
-          # Skip validation for autogenerated variables when empty/None
-          if variable.autogenerated and (variable.value is None or variable.value == ""):
-            logger.debug(f"Skipping validation for autogenerated variable: '{section.key}.{var_name}'")
-            continue
-          
-          # If value is None and the variable is required, report as missing
-          if variable.value is None:
-            if variable.is_required():
-              errors.append(f"{section.key}.{var_name} (required - no default provided)")
-            continue
-
-          # Attempt to convert/validate typed value
-          typed = variable.get_typed_value()
-
-          # For non-boolean types, treat None or empty string as invalid
-          if variable.type != "bool" and (typed is None or typed == ""):
-            if variable.is_required():
-              errors.append(f"{section.key}.{var_name} (required - cannot be empty)")
-            else:
-              errors.append(f"{section.key}.{var_name} (empty)")
-
-        except ValueError as e:
-          errors.append(f"{section.key}.{var_name} (invalid format: {e})")
-
-    if errors:
-      error_msg = "Variable validation failed: " + ", ".join(errors)
-      logger.error(error_msg)
-      raise ValueError(error_msg)
-
-  def merge(self, other_spec: Union[Dict[str, Any], 'VariableCollection'], origin: str = "override") -> 'VariableCollection':
-    """Merge another spec or VariableCollection into this one with precedence tracking.
-    
-    OPTIMIZED: Works directly on objects without dict conversions for better performance.
-    
-    The other spec/collection has higher precedence and will override values in self.
-    Creates a new VariableCollection with merged data.
-    
-    Args:
-        other_spec: Either a spec dictionary or another VariableCollection to merge
-        origin: Origin label for variables from other_spec (e.g., 'template', 'config')
-        
-    Returns:
-        New VariableCollection with merged data
-        
-    Example:
-        module_vars = VariableCollection(module_spec)
-        template_vars = module_vars.merge(template_spec, origin='template')
-        # Variables from template_spec override module_spec
-        # Overridden variables get origin 'template'
-    """
-    # Convert dict to VariableCollection if needed (only once)
-    if isinstance(other_spec, dict):
-      other = VariableCollection(other_spec)
-    else:
-      other = other_spec
-    
-    # Create new collection without calling __init__ (optimization)
-    merged = VariableCollection.__new__(VariableCollection)
-    merged._sections = {}
-    merged._variable_map = {}
-    
-    # First pass: clone sections from self
-    for section_key, self_section in self._sections.items():
-      if section_key in other._sections:
-        # Section exists in both - will merge
-        merged._sections[section_key] = self._merge_sections(
-          self_section, 
-          other._sections[section_key], 
-          origin
-        )
-      else:
-        # Section only in self - clone it
-        merged._sections[section_key] = self_section.clone()
-    
-    # Second pass: add sections that only exist in other
-    for section_key, other_section in other._sections.items():
-      if section_key not in merged._sections:
-        # New section from other - clone with origin update
-        merged._sections[section_key] = other_section.clone(origin_update=origin)
-    
-    # Rebuild variable map for O(1) lookups
-    for section in merged._sections.values():
-      for var_name, variable in section.variables.items():
-        merged._variable_map[var_name] = variable
-    
-    return merged
-  
-  def _infer_origin_from_context(self) -> str:
-    """Infer origin from existing variables (fallback)."""
-    for section in self._sections.values():
-      for variable in section.variables.values():
-        if variable.origin:
-          return variable.origin
-    return "template"
-  
-  def _merge_sections(self, self_section: VariableSection, other_section: VariableSection, origin: str) -> VariableSection:
-    """Merge two sections, with other_section taking precedence.
-    
-    Args:
-        self_section: Base section
-        other_section: Section to merge in (takes precedence)
-        origin: Origin label for merged variables
-        
-    Returns:
-        New merged VariableSection
-    """
-    # Start with a clone of self_section
-    merged_section = self_section.clone()
-    
-    # Update section metadata from other (other takes precedence)
-    if other_section.title:
-      merged_section.title = other_section.title
-    if other_section.description:
-      merged_section.description = other_section.description
-    if other_section.toggle:
-      merged_section.toggle = other_section.toggle
-    # Required flag always updated
-    merged_section.required = other_section.required
-    # Needs/dependencies always updated
-    if other_section.needs:
-      merged_section.needs = other_section.needs.copy()
-    
-    # Merge variables
-    for var_name, other_var in other_section.variables.items():
-      if var_name in merged_section.variables:
-        # Variable exists in both - merge with other taking precedence
-        self_var = merged_section.variables[var_name]
-        
-        # Build update dict with ONLY explicitly provided fields from other
-        update = {}
-        if 'type' in other_var._explicit_fields and other_var.type:
-          update['type'] = other_var.type
-        if ('value' in other_var._explicit_fields or 'default' in other_var._explicit_fields) and other_var.value is not None:
-          update['value'] = other_var.value
-        if 'description' in other_var._explicit_fields and other_var.description:
-          update['description'] = other_var.description
-        if 'prompt' in other_var._explicit_fields and other_var.prompt:
-          update['prompt'] = other_var.prompt
-        if 'options' in other_var._explicit_fields and other_var.options:
-          update['options'] = other_var.options
-        if 'sensitive' in other_var._explicit_fields and other_var.sensitive:
-          update['sensitive'] = other_var.sensitive
-        if 'extra' in other_var._explicit_fields and other_var.extra:
-          update['extra'] = other_var.extra
-        
-        # Update origin tracking (only keep the current source, not the chain)
-        update['origin'] = origin
-        
-        # Clone with updates
-        merged_section.variables[var_name] = self_var.clone(update=update)
-      else:
-        # New variable from other - clone with origin
-        merged_section.variables[var_name] = other_var.clone(update={'origin': origin})
-    
-    return merged_section
-  
-  def filter_to_used(self, used_variables: Set[str], keep_sensitive: bool = True) -> 'VariableCollection':
-    """Filter collection to only variables that are used (or sensitive).
-    
-    OPTIMIZED: Works directly on objects without dict conversions for better performance.
-    
-    Creates a new VariableCollection containing only the variables in used_variables.
-    Sections with no remaining variables are removed.
-    
-    Args:
-        used_variables: Set of variable names that are actually used
-        keep_sensitive: If True, also keep sensitive variables even if not in used set
-        
-    Returns:
-        New VariableCollection with filtered variables
-        
-    Example:
-        all_vars = VariableCollection(spec)
-        used_vars = all_vars.filter_to_used({'var1', 'var2', 'var3'})
-        # Only var1, var2, var3 (and any sensitive vars) remain
-    """
-    # Create new collection without calling __init__ (optimization)
-    filtered = VariableCollection.__new__(VariableCollection)
-    filtered._sections = {}
-    filtered._variable_map = {}
-    
-    # Filter each section
-    for section_key, section in self._sections.items():
-      # Create a new section with same metadata
-      filtered_section = VariableSection({
-        'key': section.key,
-        'title': section.title,
-        'description': section.description,
-        'toggle': section.toggle,
-        'required': section.required,
-        'needs': section.needs.copy() if section.needs else None,
-      })
-      
-      # Clone only the variables that should be included
-      for var_name, variable in section.variables.items():
-        # Include if used OR if sensitive (and keep_sensitive is True)
-        should_include = (
-          var_name in used_variables or 
-          (keep_sensitive and variable.sensitive)
-        )
-        
-        if should_include:
-          filtered_section.variables[var_name] = variable.clone()
-      
-      # Only add section if it has variables
-      if filtered_section.variables:
-        filtered._sections[section_key] = filtered_section
-        # Add variables to map
-        for var_name, variable in filtered_section.variables.items():
-          filtered._variable_map[var_name] = variable
-    
-    return filtered
-  
-  def get_all_variable_names(self) -> Set[str]:
-    """Get set of all variable names across all sections.
-    
-    Returns:
-        Set of all variable names
-    """
-    return set(self._variable_map.keys())
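Review note: the removed `merge` above documents the precedence rule (the incoming spec wins, and its variables get re-labeled with the new origin). A minimal dict-based sketch of that rule, assuming the new `VariableCollection` keeps the same semantics — `merge_specs` here is a hypothetical illustration, not part of the codebase:

```python
def merge_specs(base: dict, other: dict, origin: str = "override") -> dict:
    """Merge flat variable specs; `other` takes precedence, like VariableCollection.merge()."""
    merged = {name: dict(var) for name, var in base.items()}  # clone base variables
    for name, var in other.items():
        updated = dict(merged.get(name, {}))
        # Only explicitly provided (non-None) fields from `other` override
        updated.update({k: v for k, v in var.items() if v is not None})
        updated["origin"] = origin  # variables coming from `other` get the new origin
        merged[name] = updated
    return merged

module = {
    "port": {"value": 80, "origin": "module"},
    "host": {"value": "localhost", "origin": "module"},
}
template = {"port": {"value": 8080}}
result = merge_specs(module, template, origin="template")
# result["port"] -> {"value": 8080, "origin": "template"}; "host" keeps origin "module"
```

This mirrors the docstring's example: template-level values override module-level ones, while untouched variables retain their original origin label.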

+ 2 - 2
cli/modules/compose.py

@@ -97,8 +97,8 @@ spec = OrderedDict(
            "default": "traefik",
          },
          "traefik_host": {
-            "description": "Domain name for your service",
-            "type": "hostname",
+            "description": "Domain name for your service (e.g., app.example.com)",
+            "type": "str",
          },
          "traefik_entrypoint": {
            "description": "HTTP entrypoint (non-TLS)",

+ 1 - 1
library/compose/alloy/template.yaml

@@ -24,7 +24,7 @@ spec:
  general:
    vars:
      container_hostname:
-        type: hostname
+        type: str
        description: Docker host hostname for container identification
        default: hostname
        extra: This is needed because when alloy runs in a container, it doesn't know the hostname of the docker host.

+ 1 - 1
library/compose/bind9/template.yaml

@@ -48,7 +48,7 @@ spec:
  general:
    vars:
      bind9_version:
-        type: string
+        type: str
        description: BIND9 Docker image tag
        default: "9.20-24.10_edge"
      domain_name:

+ 1 - 1
library/compose/checkmk/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      monitoring_version:
-        type: string
+        type: str
        description: Monitoring version
        default: latest

+ 1 - 1
library/compose/clamav/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      clamav_version:
-        type: string
+        type: str
        description: Clamav version
        default: latest

+ 1 - 1
library/compose/dockge/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      dockge_version:
-        type: string
+        type: str
        description: Dockge version
        default: latest

+ 0 - 8
library/compose/gitea/template.yaml

@@ -62,11 +62,3 @@ spec:
        description: "SSH port number (should match ports_ssh)"
        type: int
        default: 2221
-      user_uid:
-        description: "User UID for Gitea process"
-        type: int
-        default: 1000
-      user_gid:
-        description: "User GID for Gitea process"
-        type: int
-        default: 1000

+ 1 - 1
library/compose/gitlab-runner/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      gitlab-runner_version:
-        type: string
+        type: str
        description: Gitlab-Runner version
        default: latest

+ 3 - 3
library/compose/gitlab/template.yaml

@@ -28,7 +28,7 @@ spec:
      container_name:
        default: "gitlab"
      external_url:
-        type: string
+        type: str
        description: External URL for GitLab (e.g., https://gitlab.example.com)
        default: 'https://gitlab.example.com'
      ssh_port:
@@ -63,11 +63,11 @@ spec:
        description: Enable GitLab Container Registry
        default: false
      registry_external_url:
-        type: string
+        type: str
        description: External URL for Container Registry
        default: 'https://registry.example.com'
      registry_hostname:
-        type: string
+        type: str
        description: Hostname for Container Registry (when using Traefik)
        default: registry.example.com
      registry_port:

+ 1 - 1
library/compose/heimdall/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      heimdall_version:
-        type: string
+        type: str
        description: Heimdall version
        default: latest

+ 1 - 1
library/compose/homeassistant/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      homeassistant_version:
-        type: string
+        type: str
        description: Homeassistant version
        default: latest

+ 1 - 1
library/compose/homepage/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      homepage_version:
-        type: string
+        type: str
        description: Homepage version
        default: latest

+ 1 - 1
library/compose/influxdb/template.yaml

@@ -52,6 +52,6 @@ spec:
  general:
    vars:
      influxdb_version:
-        type: string
+        type: str
        description: Influxdb version
        default: latest

+ 1 - 1
library/compose/loki/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      loki_version:
-        type: string
+        type: str
        description: Loki version
        default: latest

+ 1 - 1
library/compose/mariadb/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      volumes_version:
-        type: string
+        type: str
        description: Volumes version
        default: latest

+ 1 - 1
library/compose/n8n/template.yaml

@@ -21,6 +21,6 @@ spec:
  general:
    vars:
      n8n_version:
-        type: string
+        type: str
        description: N8N version
        default: latest

+ 1 - 1
library/compose/nginxproxymanager/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      volumes_version:
-        type: string
+        type: str
        description: Volumes version
        default: latest

+ 1 - 1
library/compose/nodeexporter/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      node_exporter_version:
-        type: string
+        type: str
        description: Node_Exporter version
        default: latest

+ 1 - 1
library/compose/openwebui/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      openwebui_version:
-        type: string
+        type: str
        description: Openwebui version
        default: latest

+ 1 - 1
library/compose/passbolt/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      volumes_version:
-        type: string
+        type: str
        description: Volumes version
        default: latest

+ 1 - 1
library/compose/pihole/template.yaml

@@ -50,6 +50,6 @@ spec:
  general:
    vars:
      pihole_version:
-        type: string
+        type: str
        description: Pihole version
        default: latest

+ 1 - 1
library/compose/postgres/template.yaml

@@ -16,7 +16,7 @@ spec:
  general:
    vars:
      postgres_version:
-        type: string
+        type: str
        description: PostgreSQL version
        default: latest
      postgres_secrets_enabled:

+ 1 - 1
library/compose/prometheus/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      volumes_version:
-        type: string
+        type: str
        description: Volumes version
        default: latest

+ 1 - 1
library/compose/promtail/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      promtail_version:
-        type: string
+        type: str
        description: Promtail version
        default: latest

+ 1 - 1
library/compose/teleport/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      teleport_version:
-        type: string
+        type: str
        description: Teleport version
        default: latest

+ 1 - 1
library/compose/twingate-connector/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      twingate_connector_version:
-        type: string
+        type: str
        description: Twingate_Connector version
        default: latest

+ 1 - 1
library/compose/uptimekuma/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      volumes_version:
-        type: string
+        type: str
        description: Volumes version
        default: latest

+ 1 - 1
library/compose/wazuh/template.yaml

@@ -15,6 +15,6 @@ spec:
  general:
    vars:
      wazuh.manager_version:
-        type: string
+        type: str
        description: Wazuh.Manager version
        default: latest

+ 1 - 1
library/compose/whoami/template.yaml

@@ -25,6 +25,6 @@ spec:
  general:
    vars:
      whoami_version:
-        type: string
+        type: str
        description: Whoami version
        default: latest